PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing

Diederik Paul Moeys, Daniel Neil, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Sonya Coleman, T. Martin McGinnity, Dermot Kerr, Tobi Delbruck

Research output: Contribution to conference › Paper

Abstract

Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, thus achieving a fixed latency and power consumption tradeoff. This paper describes further work on the first experiments of a closed-loop robotic system integrating a CNN together with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 to 500 Hz depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the field of view of the predator. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean 8.7% error in angular estimation. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a human performance comparison and a deconvolution analysis are also explored.
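
The two mechanisms the abstract relies on can be sketched compactly: accumulating a fixed count of 5k DVS events per histogram, so that the effective frame rate (15 to 500 Hz) tracks scene dynamics automatically, and combining the CNN's ten output classes into an analog position estimate. The Python sketch below is illustrative only, not the authors' code; the sensor resolution, the angular bin centres, and the reading of the ten classes as position bins are assumptions.

```python
import numpy as np

# Sketch of fixed-event-count DVS histograms (assumed mechanism per the
# abstract): each frame closes after N_EVENTS events, so its duration, and
# hence the instantaneous frame rate, adapts to how fast the scene changes.
N_EVENTS = 5000              # 5k ON and OFF events per histogram
SENSOR_SHAPE = (180, 240)    # assumed DAVIS pixel array (rows, cols)

def event_histograms(events):
    """Yield (frame, duration) pairs, one 2D event-count histogram per
    N_EVENTS events; `events` is an iterable of (x, y, polarity, t) tuples
    with t in seconds. Fast robot motion -> short durations -> high rate."""
    frame = np.zeros(SENSOR_SHAPE, dtype=np.int32)
    count, t_start = 0, None
    for x, y, pol, t in events:
        if t_start is None:
            t_start = t
        frame[y, x] += 1     # ON and OFF events pooled into one histogram
        count += 1
        if count == N_EVENTS:
            yield frame, t - t_start
            frame = np.zeros(SENSOR_SHAPE, dtype=np.int32)
            count, t_start = 0, None

def decode_position(class_probs, bin_centers_deg):
    """Probability-weighted average of assumed angular bin centres,
    turning discrete class scores into an analog angle estimate."""
    p = np.asarray(class_probs, dtype=np.float64)
    return float(np.dot(p, bin_centers_deg) / p.sum())

# Example: ten assumed bins spanning a 90-degree field of view.
probs = [0.01] * 7 + [0.05, 0.60, 0.28]
centers = np.linspace(-45.0, 45.0, 10)   # assumed bin centres in degrees
print(f"estimated prey bearing: {decode_position(probs, centers):+.1f} deg")
```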

Conference

Conference: 4th International Conference on Event-Based Control, Communication and Signal Processing
Abbreviated title: EBCCSP 2018
Country: France
City: Perpignan
Period: 27/06/18 - 29/06/18
Internet address: https://www.ebccsp2018.org

Fingerprint

Pixels
Cameras
Robots
Sensors
Robotics
Experiments
Deconvolution
Computer vision
Processing

Cite this

Moeys, D. P., Neil, D., Corradi, F., Kerr, E., Vance, P., Das, G., ... Delbruck, T. (Accepted/In press). PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing. Paper presented at 4th International Conference on Event-Based Control, Communication and Signal Processing, Perpignan, France.
Moeys, Diederik Paul ; Neil, Daniel ; Corradi, Federico ; Kerr, Emmett ; Vance, Philip ; Das, Gautham ; Coleman, Sonya ; McGinnity, T. Martin ; Kerr, Dermot ; Delbruck, Tobi. / PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing. Paper presented at 4th International Conference on Event-Based Control, Communication and Signal Processing, Perpignan, France. 8 p.
@conference{143a85d6e1dd437a8423c2216994dec1,
title = "PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing",
abstract = "Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, thus achieving a fixed latency and power consumption tradeoff. This paper describes further work on the first experiments of a closed-loop robotic system integrating a CNN together with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 to 500 Hz depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the field of view of the predator. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean 8.7{\%} error in angular estimation. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a human performance comparison and a deconvolution analysis are also explored.",
author = "Moeys, {Diederik Paul} and Daniel Neil and Federico Corradi and Emmett Kerr and Philip Vance and Gautham Das and Sonya Coleman and {T. Martin} McGinnity and Dermot Kerr and Tobi Delbruck",
year = "2018",
month = "5",
day = "19",
language = "English",
note = "4th International Conference on Event-Based Control, Communication and Signal Processing, EBCCSP 2018 ; Conference date: 27-06-2018 Through 29-06-2018",
url = "https://www.ebccsp2018.org",

}

Moeys, DP, Neil, D, Corradi, F, Kerr, E, Vance, P, Das, G, Coleman, S, McGinnity, TM, Kerr, D & Delbruck, T 2018, 'PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing', Paper presented at 4th International Conference on Event-Based Control, Communication and Signal Processing, Perpignan, France, 27/06/18 - 29/06/18.

PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing. / Moeys, Diederik Paul; Neil, Daniel; Corradi, Federico; Kerr, Emmett; Vance, Philip; Das, Gautham; Coleman, Sonya; McGinnity, T. Martin; Kerr, Dermot; Delbruck, Tobi.

2018. Paper presented at 4th International Conference on Event-Based Control, Communication and Signal Processing, Perpignan, France.

Research output: Contribution to conferencePaper

TY - CONF

T1 - PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing

AU - Moeys, Diederik Paul

AU - Neil, Daniel

AU - Corradi, Federico

AU - Kerr, Emmett

AU - Vance, Philip

AU - Das, Gautham

AU - Coleman, Sonya

AU - McGinnity, T. Martin

AU - Kerr, Dermot

AU - Delbruck, Tobi

PY - 2018/5/19

Y1 - 2018/5/19

N2 - Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, thus achieving a fixed latency and power consumption tradeoff. This paper describes further work on the first experiments of a closed-loop robotic system integrating a CNN together with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 to 500 Hz depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the field of view of the predator. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean 8.7% error in angular estimation. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a human performance comparison and a deconvolution analysis are also explored.

AB - Machine vision systems using convolutional neural networks (CNNs) for robotic applications are increasingly being developed. Conventional vision CNNs are driven by camera frames at a constant sample rate, thus achieving a fixed latency and power consumption tradeoff. This paper describes further work on the first experiments of a closed-loop robotic system integrating a CNN together with a Dynamic and Active Pixel Vision Sensor (DAVIS) in a predator/prey scenario. The DAVIS, mounted on the predator Summit XL robot, produces frames at a fixed 15 Hz frame rate and Dynamic Vision Sensor (DVS) histograms containing 5k ON and OFF events at a variable frame rate ranging from 15 to 500 Hz depending on the robot speeds. In contrast to conventional frame-based systems, the latency and processing cost depend on the rate of change of the image. The CNN is trained offline on the 1.25 h labeled dataset to recognize the position and size of the prey robot in the field of view of the predator. During inference, combining the ten output classes of the CNN allows extracting the analog position vector of the prey relative to the predator with a mean 8.7% error in angular estimation. The system is compatible with conventional deep learning technology, but achieves a variable latency-power tradeoff that adapts automatically to the dynamics. Finally, the robustness of the algorithm, a human performance comparison and a deconvolution analysis are also explored.

M3 - Paper

ER -

Moeys DP, Neil D, Corradi F, Kerr E, Vance P, Das G et al. PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing. 2018. Paper presented at 4th International Conference on Event-Based Control, Communication and Signal Processing, Perpignan, France.