Learning bio-inspired movement in highly noisy environments

  • James Gillespie

Student thesis: Doctoral Thesis

Abstract

Because animals are extremely effective at moving in their natural environments, which are often highly dynamic and noisy, they provide an ideal model from which to draw inspiration for robust robotic movement and navigation. Although bio-inspired approaches have been shown experimentally to work, bio-robotics research lacks a formal theoretical underpinning, a gap exemplified by the Braitenberg vehicle model of insect taxis, which relies on noise-free assumptions. This limitation restricts the model's applicability in real-world scenarios characterised by sensor noise and uncertainty.
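
For readers unfamiliar with the model, the following is a minimal sketch of the deterministic Braitenberg Vehicle 3a dynamics the thesis builds on: each wheel is driven by the ipsilateral sensor through an inhibitory connection, so the vehicle turns towards a stimulus source and slows as it approaches. The stimulus field, sensor geometry, and connection function F(s) = v_max / (1 + ks) below are illustrative assumptions, not taken from the thesis.

    import numpy as np

    def stimulus(p, source):
        # Illustrative stimulus field, decaying with squared distance to the source.
        return 1.0 / (1.0 + np.sum((p - source) ** 2))

    def bv3a_step(pose, source, dt=0.05, axle=0.2, v_max=1.0, k=10.0):
        # One Euler step of a differential-drive Braitenberg Vehicle 3a:
        # ipsilateral, inhibitory sensor-motor connections F(s) = v_max / (1 + k*s).
        x, y, th = pose
        left = np.array([x + 0.1 * np.cos(th + 0.5), y + 0.1 * np.sin(th + 0.5)])
        right = np.array([x + 0.1 * np.cos(th - 0.5), y + 0.1 * np.sin(th - 0.5)])
        vl = v_max / (1.0 + k * stimulus(left, source))   # left sensor -> left wheel
        vr = v_max / (1.0 + k * stimulus(right, source))  # right sensor -> right wheel
        v, w = (vl + vr) / 2.0, (vr - vl) / axle          # body velocities
        return np.array([x + v * np.cos(th) * dt,
                         y + v * np.sin(th) * dt,
                         th + w * dt])

    # The vehicle drives towards the source and decelerates near it; whether a
    # given start pose converges depends on F (the basin-of-attraction question).
    pose, source = np.array([1.5, 0.0, np.pi]), np.zeros(2)
    for _ in range(2000):
        pose = bv3a_step(pose, source)
    print(pose[:2])  # close to the source for this start pose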

To address this challenge, there is growing interest in developing more robust and adaptive robotic movement and navigation strategies using learning techniques. Reinforcement Learning (RL), in particular, has shown promise in enabling robots to learn from experience and adapt their behaviour to changing conditions, even when their sensors provide uncertain information. However, despite the usefulness of bio-inspired steering models and the success of RL in various applications, the application of Reinforcement Learning to the control of Braitenberg vehicles remains unexplored.

In this thesis, we present the first application of Reinforcement Learning for the control of target-seeking Braitenberg vehicles in highly stochastic scenarios, in turn producing robust steering controllers for source-seeking robots. Specifically, this thesis makes three contributions. Firstly, an RL framework is developed and tested for learning the control of deterministic Braitenberg vehicles, and an analytical condition is derived to evaluate the learning outcomes. Results show that the framework finds functions fulfilling the derived theoretical conditions of Braitenberg Vehicle 3a without the desired output being imposed on the learning, while the new analytical condition allows the boundary of the basin of attraction to be estimated. Secondly, the developed Reinforcement Learning framework is used to learn robust steering controllers for Braitenberg Vehicle 3a under high levels of sensor noise. RL control is shown to achieve a 93% trajectory success rate in some simulations under the highest noise levels, representing a 46% improvement over empirical control strategies. Finally, the transferability of the developed Reinforcement Learning controller to real-world scenarios is demonstrated by testing it on a physical autonomous mobile robotic platform. Results show that robot control via the trained Reinforcement Learning policy can improve trajectory success by 32.5% compared to the empirical controller, while maintaining equivalent trajectory lengths.
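
As a concrete illustration of the kind of model-free learning described above, the sketch below trains a tabular Q-learning controller to map noisy, discretised left/right sensor readings to wheel-speed pairs on a simple source-seeking task. The discretisation, action set, shaped reward, and Gaussian sensor-noise model are hypothetical stand-ins; the thesis's actual RL formulation may differ.

    import numpy as np

    rng = np.random.default_rng(42)
    N_BINS, SPEEDS = 6, np.linspace(0.0, 1.0, 4)   # hypothetical discretisation
    Q = np.zeros((N_BINS, N_BINS, len(SPEEDS), len(SPEEDS)))

    def stimulus(p, source):
        return 1.0 / (1.0 + np.sum((p - source) ** 2))

    def sense(pose, source, noise):
        # Noisy left/right readings from sensors offset from the body centre,
        # clipped and discretised into N_BINS levels per sensor.
        x, y, th = pose
        s = np.array([stimulus(np.array([x + 0.1 * np.cos(th + a),
                                         y + 0.1 * np.sin(th + a)]), source)
                      for a in (0.5, -0.5)])
        s = np.clip(s + rng.normal(0.0, noise, 2), 0.0, 1.0)
        return tuple(np.minimum((s * N_BINS).astype(int), N_BINS - 1))

    def step(pose, vl, vr, dt=0.1, axle=0.2):
        x, y, th = pose
        v, w = (vl + vr) / 2.0, (vr - vl) / axle   # differential-drive kinematics
        return np.array([x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + w * dt])

    def run_episode(noise=0.2, eps=0.1, alpha=0.1, gamma=0.95, T=300):
        source = np.zeros(2)
        pose = np.array([*rng.uniform(-2.0, 2.0, 2), rng.uniform(0.0, 2.0 * np.pi)])
        s = sense(pose, source, noise)
        for _ in range(T):
            if rng.random() < eps:                 # epsilon-greedy exploration
                a = tuple(rng.integers(0, len(SPEEDS), 2))
            else:
                a = np.unravel_index(np.argmax(Q[s]), Q[s].shape)
            d0 = np.linalg.norm(pose[:2] - source)
            pose = step(pose, SPEEDS[a[0]], SPEEDS[a[1]])
            d1 = np.linalg.norm(pose[:2] - source)
            s2 = sense(pose, source, noise)
            # Shaped reward: progress made towards the source this step.
            Q[s][a] += alpha * ((d0 - d1) + gamma * np.max(Q[s2]) - Q[s][a])
            s = s2
            if d1 < 0.1:                           # target reached
                break

    for _ in range(2000):
        run_episode()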

Thesis is embargoed until 31st March 2026
Date of Award: Mar 2024
Original language: English
Supervisors: Jose Santos & Nazmul Siddique

Keywords

  • Braitenberg vehicles
  • reinforcement learning
  • autonomous vehicles
  • model-free learning
  • dynamical systems
  • drift-diffusion models
  • stochastic processes
  • Monte Carlo methods
