TY - GEN
T1 - Self-regulated Learning Algorithm for Distributed Coding Based Spiking Neural Classifier
AU - Machingal, Pranav
AU - Thousif, Mohammed
AU - Dora, Shirin
AU - Sundaram, Suresh
N1 - Will be published but currently awaiting ISSN (final accepted manuscript not yet received).
PY - 2020/3/20
Y1 - 2020/3/20
N2 - This paper proposes a Distributed Coding Spiking Neural Network (DC-SNN) with a self-regulated learning algorithm for pattern classification problems. DC-SNN employs two hidden layers. The first hidden layer contains receptive field neurons that convert real-valued input features into spike patterns, and the second hidden layer employs leaky integrate-and-fire (LIF) neurons with lateral inhibitory interconnections. The second hidden layer is termed the distributed coding layer in the rest of the paper. The inhibitory interconnections in the distributed coding layer ensure that each neuron in this layer learns a distinct spike pattern from the input feature space. The synaptic weights between layers and the weights of the lateral inhibitory connections are learned using a self-regulated learning algorithm. Self-regulation identifies which neurons in the output layer and the distributed coding layer to update, and adapts the learning rate based on the temporal separation between spikes in the output layer. It also skips learning from samples that are correctly classified with high temporal separation, thereby preventing over-training. Detailed performance comparisons of DC-SNN with other SNN algorithms in the literature are presented using six benchmark data sets from the UCI machine learning repository. Further, the performance of DC-SNN is evaluated on a real-world brain-computer interface problem: the classification of electroencephalogram (EEG) signals recorded during motor-imagery tasks. The results indicate that the proposed DC-SNN architecture provides slightly better generalization ability and is suitable for deep spiking networks.
UR - https://wcci2020.org/
M3 - Conference contribution
T3 - Proceedings of International Joint Conference on Neural Networks 2020
BT - Proceedings of International Joint Conference on Neural Networks 2020
T2 - International Joint Conference on Neural Networks 2020
Y2 - 19 July 2020 through 24 July 2020
ER -