Abstract
Scalability is a key challenge for digital spiking neural networks (SNNs) in hardware. This paper proposes an efficient neuron architecture (ENA) that reduces the silicon area occupied by neurons. Because computing resources (e.g. DSP slices in FPGAs) are limited for hardware SNNs, the proposed ENA shares computing components at two levels, synapse and neuron: a neuron computing core is developed as the key component for the neuron model computation and is shared by multiple synapses within one neuron cell, and the computing component of one neuron is in turn shared by several neurons within one layer of the SNN system. A test bench experiment is designed for a Xilinx FPGA device, and the results demonstrate that the proposed ENA occupies relatively few hardware resources and can scale to large SNN implementations.
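To illustrate the idea behind the two-level sharing mechanism, here is a minimal behavioural sketch in Python. It is not the paper's implementation: the leaky integrate-and-fire (LIF) model, the parameter names, and the layer sizes are assumptions chosen for illustration. A single shared update function stands in for one hardware computing core that is time-multiplexed across all synapses of a neuron and across all neurons of a layer, rather than each neuron owning its own arithmetic circuitry.

```python
# Behavioural sketch of a time-multiplexed neuron computing core.
# Hardware analogy: one arithmetic datapath serves every neuron in a
# layer, one neuron per time slot, instead of one datapath per neuron.
# The LIF model and all parameters below are illustrative assumptions.
import random

N_NEURONS = 8        # neurons in one layer sharing a single core
N_SYNAPSES = 16      # synapses per neuron sharing one accumulator
V_THRESH = 1.0       # firing threshold
V_RESET = 0.0        # reset potential after a spike
LEAK = 0.9           # leak factor applied each time step

def neuron_core(v, syn_current):
    """Shared computing core: one LIF membrane update.

    In hardware this corresponds to a single DSP/adder datapath that
    is reused (time-multiplexed) by every neuron in the layer.
    """
    v = LEAK * v + syn_current
    if v >= V_THRESH:
        return V_RESET, 1      # spike emitted, membrane reset
    return v, 0

def step_layer(v_mem, weights, in_spikes):
    """One time step for the whole layer via the single shared core."""
    out_spikes = [0] * N_NEURONS
    for n in range(N_NEURONS):          # neuron-level sharing
        acc = 0.0
        for s in range(N_SYNAPSES):     # synapse-level sharing: one
            if in_spikes[s]:            # accumulator handles all
                acc += weights[n][s]    # synapses of the neuron in turn
        v_mem[n], out_spikes[n] = neuron_core(v_mem[n], acc)
    return out_spikes

if __name__ == "__main__":
    random.seed(0)
    weights = [[random.uniform(0.0, 0.2) for _ in range(N_SYNAPSES)]
               for _ in range(N_NEURONS)]
    v_mem = [0.0] * N_NEURONS
    for t in range(5):
        spikes_in = [random.random() < 0.3 for _ in range(N_SYNAPSES)]
        print(t, step_layer(v_mem, weights, spikes_in))
```

The trade-off mirrors the hardware setting: each time step now takes on the order of N_NEURONS × N_SYNAPSES sequential operations through the shared core, so silicon area is exchanged for processing latency.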
Original language | English |
---|---|
Title of host publication | Unknown Host Publication |
Publisher | Institution of Engineering and Technology |
Pages | 1-6 |
Number of pages | 6 |
Publication status | Accepted/In press - 5 May 2016 |
Event | 27th Irish Signals and Systems Conference (ISSC) - Duration: 5 May 2016 → … |
Conference
Conference | 27th Irish Signals and Systems Conference (ISSC) |
---|---|
Period | 5/05/16 → … |
Keywords
- field programmable gate arrays
- neural nets
- ENA
- FPGA-based spiking neural networks
- Xilinx FPGA device
- computing component sharing mechanism
- digital SNN
- digital spiking neural networks
- efficient neuron architecture
- hardware SNNs
- neuron computing core
- neuron model computation
- silicon area reduction
- Computational modeling
- Computer architecture
- Hardware
- Integrated circuit modeling
- Neurons
- Random access memory
- Topology
- FPGA implementation
- efficient design
- sharing mechanism
- spiking neural networks