Abstract

Long-term visual localization in changing environments is a challenging problem for autonomous driving and mobile robots. Under varying seasons, illumination, and other changing weather conditions, traditional image retrieval methods struggle to achieve satisfactory results in long-term visual localization. Inspired by the associative recognition function of the human brain, an image retrieval method based on a multi-domain association-guided network is therefore proposed to address the long-term visual localization problem. The key idea is to extract discriminative domain-invariant features across different scenes through multi-domain image transformation performed by a perceptual network and a conceptual network. In addition, to better associate image features of different scenes in the conceptual network and to guide the perceptual network toward more robust domain-invariant features, an association-guided module is designed that requires no external datasets. On this basis, a domain feature loss function and a loss-function guidance mechanism are introduced to assist the training of the two networks and achieve better performance. Finally, experiments are carried out on the CMU-Seasons and RobotCar-Seasons datasets. Compared with state-of-the-art methods, the proposed method improves the high-precision localization results on the urban, suburban, and park scenes of the CMU-Seasons dataset by 1.5%, 0.5%, and 0.7%, respectively, verifying the effectiveness of the proposed method under various seasonal and illumination conditions.
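To make the retrieval setting concrete, the following is a minimal sketch (not the authors' code) of image-retrieval-based localization: a query image's global descriptor is matched by cosine similarity against a database of reference descriptors with known poses, and the best match's pose is returned. The function and variable names (`retrieve_pose`, `db_descs`, `db_poses`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def retrieve_pose(query_desc: torch.Tensor,
                  db_descs: torch.Tensor,
                  db_poses: list):
    """Return the pose of the best-matching reference image.

    query_desc: (D,) global descriptor of the query image.
    db_descs:   (N, D) descriptors of reference images with known poses.
    db_poses:   list of N poses (e.g., 6-DoF camera poses).
    """
    # Cosine similarity between the query and every reference descriptor.
    sims = F.cosine_similarity(query_desc.unsqueeze(0), db_descs, dim=-1)
    best = int(sims.argmax())
    return db_poses[best], float(sims[best])
```

Under changing seasons or illumination, the descriptors must be domain-invariant for this nearest-neighbor step to succeed, which is the property the proposed network is trained to provide.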
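The two training objectives named in the abstract can be sketched as follows, under stated assumptions: the domain feature loss pulls together conceptual-network embeddings of the same scene rendered in two domains (e.g., summer vs. winter), and the guidance mechanism pushes the perceptual network's features toward the (detached) associated conceptual features. This is a hypothetical reading, not the paper's exact formulation; all names and the cosine-distance choice are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainInvariantLoss(nn.Module):
    """Hypothetical sketch of the two loss terms described in the abstract."""

    def __init__(self, guidance_weight: float = 0.5):
        super().__init__()
        self.guidance_weight = guidance_weight  # assumed trade-off weight

    def forward(self, perceptual_feat, conceptual_feat_a, conceptual_feat_b):
        # Domain feature loss: embeddings of the same scene in two domains
        # should coincide (cosine distance; an assumption of this sketch).
        f_a = F.normalize(conceptual_feat_a, dim=-1)
        f_b = F.normalize(conceptual_feat_b, dim=-1)
        domain_loss = (1.0 - (f_a * f_b).sum(dim=-1)).mean()

        # Guidance loss: align the perceptual features with the detached
        # conceptual features, so gradients here only update the
        # perceptual branch.
        target = F.normalize(conceptual_feat_a.detach(), dim=-1)
        p = F.normalize(perceptual_feat, dim=-1)
        guidance_loss = (1.0 - (p * target).sum(dim=-1)).mean()

        return domain_loss + self.guidance_weight * guidance_loss
```

Detaching the conceptual target is one plausible way to realize "guidance": the conceptual network acts as a teacher whose associated features steer the perceptual network without being perturbed by it.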
| Original language | English |
|---|---|
| Pages (from-to) | 1-14 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Early online date | 3 Jan 2025 |
| DOIs | |
| Publication status | Published online - 3 Jan 2025 |
Bibliographical note
Publisher Copyright: © 2016 IEEE.
Keywords
- Visual localization
- image retrieval
- association guided
- changing environment
- Location awareness
- Visualization
- Feature extraction
- Translation
- Image retrieval
- Training
- Transfer learning
- Simultaneous localization and mapping
- Image recognition
- Mobile robots
- association-guided