Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network

Abstract
This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is imported onto the on-board computer of the Summit XL, which runs jAER and receives steering directions in real time. Successful closed-loop trials, with accuracies up to 87% or 92% (depending on the evaluation criteria), are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
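The constant-event-count framing described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation (which runs in jAER): the event tuple layout, sensor resolution, events-per-frame budget, and the CNN class ordering in `steer_from_scores` are all assumptions made here for the example. The key property it demonstrates is that a frame closes after a fixed number of events, so the frame rate rises and falls with scene activity.

```python
import numpy as np

def events_to_frames(events, sensor_shape=(180, 240), events_per_frame=5000):
    """Accumulate fixed-count DVS event 'frames' as 2D histograms.

    `events` is an iterable of (x, y, polarity) tuples; the tuple layout
    and default frame budget are illustrative assumptions. ON events add
    and OFF events subtract, giving a signed activity map per frame.
    """
    frame = np.zeros(sensor_shape, dtype=np.float32)
    count = 0
    for x, y, polarity in events:
        frame[y, x] += 1.0 if polarity else -1.0
        count += 1
        if count == events_per_frame:
            # Frame is full: emit it and start a new one. Busy scenes fill
            # frames faster, so the effective sample rate is data driven.
            yield frame
            frame = np.zeros(sensor_shape, dtype=np.float32)
            count = 0

def steer_from_scores(scores):
    """Map the CNN's four output scores to a steering label.

    The class order here is a hypothetical choice for illustration.
    """
    classes = ("left", "center", "right", "non-visible")
    return classes[int(np.argmax(scores))]
```

A usage example: feeding 12,000 synthetic events through `events_to_frames` with the default budget yields two complete frames, with the remainder held back until the next frame fills.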
| Original language | English |
|---|---|
| Title of host publication | Unknown Host Publication |
| Publisher | IEEE |
| Number of pages | 8 |
| ISBN (Print) | 978-1-5090-4196-1 |
| DOIs | |
| Publication status | Published online - 24 Oct 2016 |
| Event | Second International Conference on Event-Based Control, Communication, and Signal Processing - Kraków, Poland Duration: 24 Oct 2016 → … |
Conference
| Conference | Second International Conference on Event-Based Control, Communication, and Signal Processing |
|---|---|
| Period | 24/10/16 → … |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9: Industry, Innovation, and Infrastructure
Keywords
- Convolutional Neural Network
- Artificial Retina
- Robotics
Fingerprint
Research topics of 'Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network'.