Vital information matching in vision-and-language navigation

Zixi Jia, Kai Yu, Jingyu Ru, Sikai Yang, Sonya Coleman

Research output: Contribution to journal › Article › peer-review

Abstract

With the rapid development of artificial intelligence, many researchers have turned to vision-and-language navigation, one of the central tasks in multi-modal machine learning. A core challenge in this field is how to fuse the multiple input modalities, which is crucial for feeding the intrinsic information they share back into the model. However, existing models rely only on simple data augmentation or expansion and fall far short of exploiting the intrinsic relationships between modalities. To overcome these challenges, this paper proposes a novel multi-modal matching feedback self-tuning model, the Vital Information Matching Feedback Self-tuning Network (VIM-Net). VIM-Net consists of two matching feedback modules: a visual matching feedback module (V-mat) and a trajectory matching feedback module (T-mat). Specifically, V-mat matches the target information from visual recognition with the entity information extracted from the instruction, while T-mat matches the serialized trajectory features with the movement directions given by the instruction. Ablation and comparative experiments are conducted on the Matterport3D simulator and the Room-to-Room (R2R) benchmark, and the resulting navigation performance is reported in detail. The results show that the proposed model is effective on this task.
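The abstract does not give implementation details, but the shape of a cross-modal matching feedback module can be pictured with a minimal sketch. The PyTorch code below is an illustrative assumption, not the authors' implementation: the class name MatchingFeedback, the shared projection space, the gating scheme, and all feature dimensions are hypothetical. It only shows the general idea common to both modules: score the alignment between two modality streams (e.g., visual targets vs. instruction entities for V-mat, trajectory states vs. direction phrases for T-mat) and feed the matching score back to re-weight the features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MatchingFeedback(nn.Module):
    """Hypothetical matching-feedback block (a sketch, not VIM-Net itself):
    projects two modality streams into a shared space, scores their alignment,
    and uses the score as a feedback gate to self-tune the fused features."""

    def __init__(self, dim_a, dim_b, dim_shared=256):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, dim_shared)
        self.proj_b = nn.Linear(dim_b, dim_shared)

    def forward(self, feats_a, feats_b):
        # feats_a: (batch, n_a, dim_a), e.g. detected visual targets
        # feats_b: (batch, n_b, dim_b), e.g. entity phrases from the instruction
        a = F.normalize(self.proj_a(feats_a), dim=-1)
        b = F.normalize(self.proj_b(feats_b), dim=-1)
        # pairwise cosine-similarity matching scores: (batch, n_a, n_b)
        scores = torch.bmm(a, b.transpose(1, 2))
        # soft alignment: for each item in stream A, a weighted summary of B
        attn = scores.softmax(dim=-1)
        matched_b = torch.bmm(attn, b)  # (batch, n_a, dim_shared)
        # feedback gate: close to 1 when the two streams agree, lower otherwise
        gate = torch.sigmoid((a * matched_b).sum(-1, keepdim=True))
        fused = gate * a + (1 - gate) * matched_b
        return fused, scores


# Hypothetical instantiations with assumed feature sizes:
# V-mat style: visual region features vs. instruction entity embeddings
v_mat = MatchingFeedback(dim_a=2048, dim_b=768)
# T-mat style: serialized trajectory states vs. movement-direction embeddings
t_mat = MatchingFeedback(dim_a=512, dim_b=768)
fused, scores = v_mat(torch.randn(2, 36, 2048), torch.randn(2, 5, 768))
```

The gating step is what makes this a feedback mechanism rather than plain cross-attention: the alignment score itself decides how strongly the matched information overrides the original features, which is one plausible reading of the "self-tuning" described in the abstract.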
Original language: English
Article number: 1035921
Journal: Frontiers in Neurorobotics
Volume: 16
Early online date: 17 Nov 2022
Publication status: Published online - 17 Nov 2022

Bibliographical note

Funding Information:
This research was funded by the National Natural Science Foundation of China (61872073), the Fundamental Research Funds for the Central Universities (N2126005 and N2126002), and the National Natural Science Foundation of Liaoning (2021-MS-101).

Publisher Copyright:
Copyright © 2022 Jia, Yu, Ru, Yang and Coleman.

Keywords

  • vision-and-language navigation
  • multimodal matching
  • self-tuning module
  • collaborative learning
  • vital information matching networks
