Efficient neuron architecture for FPGA-based spiking neural networks

Lei Wan, Yuling Luo, Shuxiang Song, Jim Harkin, Junxiu Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

7 Citations (Scopus)

Abstract

Scalability is a key challenge for digital spiking neural networks (SNNs) in hardware. This paper proposes an efficient neuron architecture (ENA) that reduces the silicon area occupied by neurons. Because computation resources (e.g. DSP slices in FPGAs) are limited for hardware SNNs, the proposed ENA shares computing components at two levels, synapse and neuron, to reduce the occupied resources. The neuron computing core is developed as the key component for neuron model computation and is shared by multiple synapses within one neuron cell; in turn, the computing component of one neuron is shared by several neurons within one layer of the SNN system. A test bench experiment targeting a Xilinx FPGA device demonstrates that the proposed ENA occupies relatively few hardware resources and can scale to large SNN implementations.
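The two-level sharing described in the abstract amounts to time-multiplexing a single arithmetic unit, first across the synapses of a neuron and then across the neurons of a layer, with membrane state parked in memory between visits. The following is a minimal software sketch of that idea in Python, assuming a simple leaky integrate-and-fire model; the class, parameter names, and dynamics are illustrative assumptions, not the paper's exact neuron model or hardware design.

import numpy as np

class SharedNeuronCore:
    """One arithmetic core serving every neuron in a layer, one at a time.
    Hypothetical sketch of the sharing mechanism, not the paper's design."""

    def __init__(self, n_neurons, n_synapses, threshold=1.0, leak=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = rng.uniform(0.0, 0.2, (n_neurons, n_synapses))
        self.potential = np.zeros(n_neurons)   # membrane state held in RAM
        self.threshold = threshold
        self.leak = leak                        # assumed LIF decay factor

    def step(self, spikes_in):
        """One tick: the single core loops over neurons, then synapses."""
        spikes_out = np.zeros(len(self.potential), dtype=bool)
        for n in range(len(self.potential)):        # neuron-level sharing
            acc = 0.0
            for s in range(len(spikes_in)):         # synapse-level sharing
                if spikes_in[s]:
                    acc += self.weights[n, s]       # one MAC unit, reused serially
            self.potential[n] = self.leak * self.potential[n] + acc
            if self.potential[n] >= self.threshold: # fire and reset
                spikes_out[n] = True
                self.potential[n] = 0.0
        return spikes_out

core = SharedNeuronCore(n_neurons=4, n_synapses=8)
print(core.step(np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)))

In hardware the serial loops trade latency for area: the same multiply-accumulate logic is reused every cycle instead of being replicated per synapse and per neuron, which is what lets the design scale on a resource-limited FPGA.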
Language: English
Title of host publication: Unknown Host Publication
Pages: 1-6
Number of pages: 6
DOI: 10.1109/ISSC.2016.7528472
Publication status: Accepted/In press - 5 May 2016
Event: 27th Irish Signals and Systems Conference (ISSC)
Duration: 5 May 2016 → …



Keywords

  • field programmable gate arrays
  • neural nets
  • ENA
  • FPGA-based spiking neural networks
  • Xilinx FPGA device
  • computing component sharing mechanism
  • digital SNN
  • digital spiking neural networks
  • efficient neuron architecture
  • hardware SNNs
  • neuron computing core
  • neuron model computation
  • silicon area reduction
  • Computational modeling
  • Computer architecture
  • Hardware
  • Integrated circuit modeling
  • Neurons
  • Random access memory
  • Topology
  • FPGA implementation
  • efficient design
  • sharing mechanism
  • spiking neural networks

Cite this

Wan, L., Luo, Y., Song, S., Harkin, J., & Liu, J. (Accepted/In press). Efficient neuron architecture for FPGA-based spiking neural networks. In Unknown Host Publication (pp. 1-6). https://doi.org/10.1109/ISSC.2016.7528472