Efficient Appearance-Based Topological Mapping and Navigation with Omnidirectional Vision

Christopher Burbridge, Joan Condell

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Because of a mobile robot’s ability to move in its environment, one of the most important and common tasks for mobile robots is, arguably, navigation. This has to be carried out on practically every mobile robot, and it has to be carried out quickly (real-time constraints) and cheaply (computational constraints). SLAM [7] is a reliable and very accurate method which is now widely used in mobile robotics. However, these algorithms are not computationally cheap. In this paper we therefore investigate a computationally efficient algorithm for simultaneous mapping and navigation that is suitable for application in simple mobile robots. We apply self-organising maps in a novel way to image data compression and indexing of topological map nodes at the same time. Our proposed system builds a dense topological map using only the visual appearance of the environment, with no need for any feature extraction or matching. This is made possible through the novel use of a self-organising map and a relaxed attitude towards loop closure and metric consistency. The inconsistencies and uncertainties within the map are not considered during mapping, but rather only during navigation, at which point a Bayesian approach is taken to allow accurate navigation. The paper concludes by presenting mobile robot experiments.
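The abstract's central idea is that a self-organising map (SOM) can serve double duty: its prototype vectors compress the appearance of images, and the index of the best-matching unit acts as a topological map node identifier. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it trains a small SOM on synthetic appearance descriptors (standing in for omnidirectional image features) and then indexes new descriptors to map-node ids. Grid size, learning-rate schedule, and descriptor dimensionality are all illustrative assumptions.

```python
import numpy as np

def train_som(samples, grid_w=4, grid_h=4, epochs=20, seed=0):
    """Train a small self-organising map. Each unit's weight vector becomes
    an appearance prototype, so an image descriptor can be compressed to
    the index of its best-matching unit (used here as a map-node id)."""
    rng = np.random.default_rng(seed)
    n_units = grid_w * grid_h
    dim = samples.shape[1]
    weights = rng.random((n_units, dim))
    # 2-D grid coordinates of each unit, used by the neighbourhood function
    coords = np.array([(i % grid_w, i // grid_w) for i in range(n_units)], float)
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs) + 0.01            # decaying learning rate
        sigma = max(grid_w, grid_h) / 2 * (1 - epoch / epochs) + 0.5
        for x in samples:
            # best-matching unit: nearest prototype to this descriptor
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            d = np.linalg.norm(coords - coords[bmu], axis=1)
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # Gaussian neighbourhood
            weights += lr * h[:, None] * (x - weights)
    return weights

def node_index(weights, descriptor):
    """Index an image into the topological map: id of the nearest prototype."""
    return int(np.argmin(np.linalg.norm(weights - descriptor, axis=1)))

# Two synthetic 'places' with distinct appearance descriptors (8-D, illustrative)
rng = np.random.default_rng(1)
place_a = rng.normal(0.2, 0.02, (30, 8))
place_b = rng.normal(0.8, 0.02, (30, 8))
som = train_som(np.vstack([place_a, place_b]))
a_id = node_index(som, place_a[0])
b_id = node_index(som, place_b[0])
```

After training, descriptors from visually distinct places fall on different units, giving node indices without any feature extraction or matching; the paper's Bayesian localisation step, handled separately at navigation time, is not shown here.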
Original language: English
Title of host publication: Unknown Host Publication
Publisher: University of Plymouth
Pages: 26-33
Number of pages: 8
Publication status: Published (in print/issue) - 1 Sept 2010
Event: TAROS 2010 - Towards Autonomous Robotic Systems 2010 - University of Plymouth, Devon, UK
Duration: 1 Sept 2010 → …

Conference

Conference: TAROS 2010 - Towards Autonomous Robotic Systems 2010
Period: 1/09/10 → …
