SAMLoc: Structure-Aware Constraints With Multi-Task Distillation for Long-Term Visual Localization

Jian Ning, Yunzhou Zhang, Xinge Zhao, Sonya Coleman, Dermot Kerr

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Real-time, robust long-term visual localization is a key technology for autonomous driving. Seasonal and illumination variation, together with limited computing power, make this problem challenging: at present, most high-performing visual localization algorithms cannot run in real time on devices with limited computing power. In this paper, we propose SAMLoc, a self-supervised 6-DoF visual localization method with structure-aware constraints and multi-task distillation. We integrate the structure-aware constraints into a hierarchical localization network trained via multi-task distillation, which greatly reduces feature extraction time while preserving localization accuracy, thereby achieving real-time, robust large-scene localization on mobile devices. Our method balances speed and accuracy, and extensive experiments on several datasets validate the effectiveness of the proposed approach. The network is not only lightweight but also generalizes well, maintaining high localization accuracy even on challenging datasets.
Original language: English
Title of host publication: Proceedings of International Conference on Robotics and Automation 2023
Publication status: Accepted/In press - 17 Jan 2023
Event: International Conference on Robotics and Automation 2023 - London, United Kingdom
Duration: 29 May 2023 - 2 Jun 2023


Conference: International Conference on Robotics and Automation 2023
Abbreviated title: ICRA 2023
Country/Territory: United Kingdom


Keywords:
  • Visual localization
  • Light-weight networks
  • Generalization

