Abstract
Artificial neural networks have significantly advanced pattern recognition, with Spiking Neural Networks (SNNs) standing out for their biological resemblance: they mimic neuronal firing through discrete spikes. This spiking mechanism allows SNNs to operate with lower power consumption than traditional networks that rely on continuous real values, and research suggests that SNNs offer superior computational capabilities over earlier generations of neural networks. However, the discontinuous nature of spikes complicates training, making standard gradient-based methods less effective and necessitating new approaches.

This thesis introduces the Class-Dependent Neuronal Activation-based Spiking Neural Network (CDNA-SNN) for pattern classification, inspired by the organisation of biological neuronal assemblies. A novel training algorithm is proposed for CDNA-SNN, which estimates neuronal firing rates when processing samples from different classes via trainable parameters called CDNAs. This method categorises neurons into class-specific groups, enhancing synaptic weight training and facilitating the identification and removal of hypoactive neurons. The thesis presents three main contributions derived from the author's original research.

The first contribution introduces CDNA-SNN and its supervised training algorithm for a single-layer network. Performance evaluations on five UCI numerical benchmark datasets reveal that CDNA-SNN performs comparably to other algorithms on 3/5 UCI datasets, while significantly outperforming them on the Liver Disorders dataset (>6.10%, p<0.01). CDNA-SNN also achieves significantly higher accuracies than Synaptic Weight Association Training (SWAT) on the Iris and Breast Cancer datasets (>1.69%, p<0.001) and surpasses SpikeProp on the Iris dataset (1.62%, p=0.04).
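The class-dependent activation mechanism summarised above can be sketched in a few lines. The sketch below is an illustrative assumption rather than the algorithm from the thesis: it estimates each neuron's mean firing rate per class as its CDNA, assigns neurons to the class for which they are most active, and flags hypoactive neurons (low activity for every class) for removal. All names, thresholds, and the running-mean update are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the CDNA idea: each neuron keeps one value per class
# (its Class-Dependent Neuronal Activation), estimated here as the neuron's
# mean firing rate over samples of that class.

rng = np.random.default_rng(0)
n_neurons, n_classes, n_samples = 8, 3, 300

# Simulated per-sample firing rates (spikes/s) for each neuron.
labels = rng.integers(0, n_classes, size=n_samples)
rates = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
# Toy structure: neurons 0-2 prefer class 0, 3-5 class 1, 6-7 class 2.
for c, idx in [(0, slice(0, 3)), (1, slice(3, 6)), (2, slice(6, 8))]:
    rates[labels == c, idx] += 10.0

# cdna[j, c] = running estimate of neuron j's firing rate on class-c samples.
cdna = np.zeros((n_neurons, n_classes))
counts = np.zeros(n_classes)
for r, c in zip(rates, labels):
    counts[c] += 1
    cdna[:, c] += (r - cdna[:, c]) / counts[c]  # incremental mean

# Group each neuron with the class for which it is most active, and flag
# hypoactive neurons (weak activity for every class) for removal.
assignment = cdna.argmax(axis=1)
hypoactive = cdna.max(axis=1) < 1.0

print(assignment)  # neurons grouped by preferred class
print(hypoactive)
```

In this toy setup the argmax recovers the planted grouping, which is the property the thesis exploits to organise neurons into class-specific assemblies.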
Notably, CDNA-SNN consistently utilises the smallest network, underscoring its potential as a robust alternative SNN architecture and learning algorithm, though it faces challenges with more complex image datasets such as MNIST. To address this, the second contribution extends the CDNA-SNN training algorithm to multilayer architectures by introducing a new variant of Spike-Timing-Dependent Plasticity (STDP) that uses CDNAs to modulate plasticity. Performance evaluations show that adding a layer to the single-layer CDNA-SNN improves performance. The multilayer CDNA-SNN significantly outperforms SWAT (p<0.0005) and SpikeProp (p<0.05) on 3/5 UCI datasets, and Self-Regulating Evolving Spiking Neural Networks (SRESN) (p<0.05) on 2/5 UCI datasets, while using significantly fewer trainable parameters. Additionally, compared with other supervised, fully connected SNNs, the proposed SNN achieves superior performance on Fashion MNIST and comparable performance on MNIST and Neuromorphic-MNIST (NMNIST), all while utilising 1%–35% fewer parameters.

The third contribution refines the CDNA-SNN training algorithm to handle continual learning scenarios, where data is presented to the network in distinct tasks and the algorithm does not have access to the full dataset from the outset. The modifications include a new end-to-end training algorithm for CDNA-SNN that employs various continual learning mechanisms. Experiments comparing the baseline CDNA-SNN with the enhanced version for continual learning show progressive improvements in average training accuracy with each modification. Comparative evaluations with state-of-the-art sigmoidal neural networks on the Split-MNIST and Split-Fashion MNIST datasets indicate that CDNA-SNN achieves competitive results while using 31.57%–50.17% fewer trainable parameters.

Thesis is embargoed until 30 September 2026.
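The CDNA-modulated STDP variant mentioned in the second contribution can be illustrated with a minimal pair-based sketch. Everything below is a hedged assumption for illustration, not the rule derived in the thesis: the exponential windows, the constants, and the choice to scale plasticity by the postsynaptic neuron's normalised CDNA for the current sample's class.

```python
import numpy as np

# Hedged sketch of CDNA-modulated STDP: the classic pair-based weight change
# is scaled by how strongly the postsynaptic neuron's CDNA matches the class
# of the current sample. Constants and the modulation formula are illustrative.

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate if pre fires before post, else depress."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def cdna_modulated_dw(t_pre, t_post, cdna_post, sample_class):
    """Scale plasticity by the neuron's normalised activation for this class."""
    modulation = cdna_post[sample_class] / (cdna_post.sum() + 1e-12)
    return modulation * stdp_dw(t_pre, t_post)

cdna_post = np.array([12.0, 3.0, 1.0])  # a neuron tuned to class 0
dw_same = cdna_modulated_dw(5.0, 10.0, cdna_post, sample_class=0)   # own class
dw_other = cdna_modulated_dw(5.0, 10.0, cdna_post, sample_class=2)  # other class
print(dw_same > dw_other)
```

The intended effect is that synapses onto a neuron are updated most strongly by samples of the class that neuron's assembly represents, which is one plausible reading of "using CDNAs to modulate plasticity".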
Date of Award | Sept 2024 |
---|---|
Original language | English |
Supervisor | Shirin Dora (Supervisor), Damien Coyle (Supervisor) & Martin Mc Ginnity (Supervisor) |
Keywords
- spiking neural networks
- CDNA
- STDP
- neuronal assembly
- CDNA-SNN