Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network

Diederik Moeys, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Daniel Neil, Dermot Kerr, Tobi Delbruck

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

29 Citations (Scopus)

Abstract

This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is imported onto the on-board computer of the Summit XL robot, which runs jAER and receives steering directions in real time. Successful results on closed-loop trials, with accuracies of up to 87% or 92% (depending on the evaluation criteria), are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
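
The key idea in the abstract is that the CNN is fed "frames" built from a constant number of DVS events, so the effective frame rate tracks scene activity (the reported 15 Hz to 240 Hz) rather than a fixed clock. The sketch below illustrates that event-count-frame idea; it is not code from the paper, and the sensor resolution, events-per-frame count, CNN input size, and function names (events_to_frame, steering_command) are assumptions chosen only for illustration.

```python
# Minimal sketch (not from the paper): build a fixed-event-count "frame" from
# DAVIS ON/OFF events so a conventional frame-based CNN can consume it.
import numpy as np

SENSOR_W, SENSOR_H = 240, 180   # DAVIS240-class resolution (assumption)
EVENTS_PER_FRAME = 5000         # constant number of events per "frame" (assumption)
FRAME_SIZE = 36                 # CNN input resolution (assumption)

def events_to_frame(events):
    """Accumulate (x, y, polarity) events into a downsampled signed histogram.

    ON events add +1, OFF events add -1. Because every frame contains exactly
    EVENTS_PER_FRAME events, frames are produced at a rate set by scene
    activity rather than by a fixed sample clock.
    """
    frame = np.zeros((FRAME_SIZE, FRAME_SIZE), dtype=np.float32)
    for x, y, polarity in events:
        u = min(int(x * FRAME_SIZE / SENSOR_W), FRAME_SIZE - 1)
        v = min(int(y * FRAME_SIZE / SENSOR_H), FRAME_SIZE - 1)
        frame[v, u] += 1.0 if polarity else -1.0
    peak = np.abs(frame).max()
    return frame / peak if peak > 0 else frame  # normalize the CNN input range

def steering_command(cnn_scores):
    """Map the four CNN outputs to a command; the label order is hypothetical."""
    labels = ("left", "center", "right", "non-visible")
    return labels[int(np.argmax(cnn_scores))]
```

In a setup like this, a fast-moving prey generates events quickly, so frames (and steering updates) arrive often, while a static scene produces almost none, which is the data-driven behaviour the abstract describes.
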
Language: English
Title of host publication: Unknown Host Publication
Number of pages: 8
DOIs: https://doi.org/10.1109/EBCCSP.2016.7605233
Publication status: E-pub ahead of print - 24 Oct 2016
Event: Second International Conference on Event-Based Control, Communication, and Signal Processing - Kraków, Poland
Duration: 24 Oct 2016 → …

Conference

Conference: Second International Conference on Event-Based Control, Communication, and Signal Processing
Period: 24/10/16 → …

Fingerprint

  • Robots
  • Neural networks
  • Pixels
  • Sensors
  • Deep learning

Keywords

  • Convolutional Neural Network
  • Artificial Retina
  • Robotics

Cite this

Moeys, D., Corradi, F., Kerr, E., Vance, P., Das, G., Neil, D., ... Delbruck, T. (2016). Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network. In Unknown Host Publication. https://doi.org/10.1109/EBCCSP.2016.7605233
@inproceedings{e28e7730551248f0ba079768413306cb,
title = "Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network",
keywords = "Convolutional Neural Network, Artificial Retina, Robotics",
author = "Diederik Moeys and Federico Corradi and Emmett Kerr and Philip Vance and Gautham Das and Daniel Neil and Dermot Kerr and Tobi Delbruck",
year = "2016",
month = "10",
day = "24",
doi = "10.1109/EBCCSP.2016.7605233",
url = "https://arxiv.org/abs/1606.09433v1",
language = "English",
isbn = "978-1-5090-4196-1",
booktitle = "Unknown Host Publication",
}
