Abstract
Braitenberg vehicles serve as bio-inspired controllers for sensor-based local navigation in wheeled robots and find applications in a range of real-world scenarios. Tuning these controllers involves finding nonlinear functions, typically implemented as neural networks, that link sensing to motor actions. However, tuning the weights to achieve the desired closed-loop navigation behaviours poses significant challenges. Some approaches use hand-tuned spiking or recurrent neural networks, while others learn the weights through evolutionary approaches. Recently, reinforcement learning (RL) has been used successfully to learn neural controllers for Braitenberg vehicle 3a, a bio-inspired model of target seeking, in simulated scenarios with high noise levels. This paper extends the application of RL for Braitenberg vehicle control to a real-world robot platform, introducing real sensor noise and testing the ability of the RL framework to compensate for this uncertainty. Comparative analyses are drawn between the neural controller acquired through RL and a simple hand-tuned counterpart, using the Colias micro-robot as the evaluation platform. Results are illustrated through analysis of the real robot trajectories, where the RL-based neural controller exhibits a 32.5% increase in successful trajectories compared to the empirically hand-tuned controller.
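For context, Braitenberg vehicle 3a couples each sensor to the wheel on the same side through an inhibitory connection, so the robot turns toward a stimulus and slows as it approaches it. The sketch below illustrates that coupling in its simplest linear form; it is not the paper's learned neural controller, and the function name, gain `k`, and the assumption of normalised sensor readings are illustrative only.

```python
def braitenberg_3a_step(left_sensor, right_sensor, v_max=1.0, k=1.0):
    """Minimal linear sketch of Braitenberg vehicle 3a (illustrative only).

    Ipsilateral inhibitory coupling: each sensor reading (assumed
    normalised to [0, 1]) slows the wheel on its own side, so the robot
    steers toward the stimulus and decelerates as it gets closer.
    """
    left_wheel = v_max - k * left_sensor    # stimulus on the left slows the left wheel -> turn left
    right_wheel = v_max - k * right_sensor  # stimulus on the right slows the right wheel -> turn right
    return left_wheel, right_wheel
```

In the paper this fixed linear mapping is replaced by a nonlinear neural network whose weights are learned with RL, which is what allows the controller to cope with real sensor noise.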
Original language | English |
---|---|
Pages | 1-8 |
Number of pages | 8 |
DOIs | |
Publication status | Published online - 20 Mar 2024 |
Event | 31st Irish Conference on Artificial Intelligence and Cognitive Science (AICS) - Atlantic Technological University, Letterkenny, Donegal, Ireland. Duration: 7 Dec 2023 → 8 Dec 2023. https://www.aics.ie/ |
Conference
Conference | 31st Irish Conference on Artificial Intelligence and Cognitive Science (AICS) |
---|---|
Abbreviated title | AICS |
Country/Territory | Ireland |
City | Letterkenny, Donegal |
Period | 7/12/23 → 8/12/23 |
Internet address | https://www.aics.ie/ |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Keywords
- Braitenberg vehicles
- reinforcement learning
- stochastic processes
- navigation
- uncertainty