Abstract
Neural coding schemes are powerful tools in neuroscience. This paper introduces three neural coding schemes for event-based vision data, designed to emulate the behaviour exhibited by neurons under stimuli: a phase-of-firing scheme and two sparse coding schemes. It is determined that machine learning approaches, i.e. a Convolutional Neural Network combined with a Stacked Autoencoder network, produce powerful descriptors of the patterns within the events. These coding schemes are deployed in an existing action recognition template and evaluated on two popular event-based data sets.
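The record does not include an implementation, but as an illustrative aside, the sketch below shows one generic way a phase-of-firing style encoding of event camera output could map events onto a per-pixel phase frame. The event layout (x, y, timestamp, polarity), the function name `phase_of_firing_encode`, and the reference oscillation period are assumptions made here for illustration; they are not the formulation used in the paper.

```python
import numpy as np

def phase_of_firing_encode(events, width, height, period_us=10_000):
    """Illustrative phase-of-firing encoding of DVS events.

    events: array of shape (N, 4) with columns (x, y, timestamp_us, polarity).
    The field order and the reference period are assumptions, not the
    scheme described in the paper.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, _pol in events:
        # Phase of the event relative to an assumed background oscillation.
        phase = 2.0 * np.pi * ((t % period_us) / period_us)
        # Keep the most recent phase observed at each pixel.
        frame[int(y), int(x)] = phase
    return frame

# Toy usage: a handful of synthetic events on a 4x4 sensor.
if __name__ == "__main__":
    ev = np.array([[0, 0, 2500, 1],
                   [1, 2, 7500, 0],
                   [3, 3, 12500, 1]], dtype=np.float64)
    print(phase_of_firing_encode(ev, width=4, height=4))
```

In a pipeline along the lines described in the abstract, such encoded frames would then be fed to the CNN and Stacked Autoencoder stage to produce descriptors.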
Original language | English |
---|---|
Title of host publication | 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings |
Pages | 2468-2472 |
Number of pages | 5 |
ISBN (Electronic) | 978-1-5090-6631-5 |
DOIs | |
Publication status | Published (in print/issue) - 4 May 2020 |
Event | 2020 IEEE International Conference on Acoustics, Speech and Signal Processing - Barcelona, Spain. Duration: 4 May 2020 → 8 May 2020. https://2020.ieeeicassp.org |
Publication series
Name | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
---|---|
Volume | 2020-May |
ISSN (Print) | 1520-6149 |
Conference
Conference | 2020 IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Abbreviated title | ICASSP |
Country/Territory | Spain |
City | Barcelona |
Period | 4/05/20 → 8/05/20 |
Internet address | https://2020.ieeeicassp.org |
Bibliographical note
Publisher Copyright: © 2020 IEEE
Keywords
- Event-based vision
- Convolutional Neural Network (CNN)
- Encoding scheme
- Feature extraction
- Object recognition