Research on Spiking Neural Networks has surged in recent years due to their advantages over traditional neural networks, including efficient processing and an inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation counterparts when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed over several classic and event-based image classification datasets to compare the performance of the proposed detector to that of other OoD detection schemes from the literature. Our experiments also assess whether fusing our proposed approach with other baseline OoD detection schemes can complement and boost the overall OoD detection capability. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes and, when fused together, can significantly improve the detection scores of the constituent individual detectors.
Furthermore, the explainability technique associated with our proposal is shown to produce relevance attribution maps that conform to expectations for synthetically created OoD instances.
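The core idea described in the abstract, characterizing in-distribution behavior through hidden-layer spike count patterns and flagging atypical activations, can be illustrated with a minimal sketch. This is not the paper's actual method or API: the functions `fit_detector` and `ood_score`, the per-class centroid characterization, and the Euclidean distance scoring are all illustrative assumptions standing in for the detector the paper proposes.

```python
# Hypothetical sketch of spike-count-based OoD scoring, assuming a
# trained SNN exposes one spike-count vector per input (counts over
# hidden neurons). All names here are illustrative, not the paper's.
import numpy as np

def fit_detector(train_counts, train_labels):
    """Characterize in-distribution activations: one mean spike-count
    pattern (centroid) per class, computed over training data."""
    classes = np.unique(train_labels)
    return {c: train_counts[train_labels == c].mean(axis=0)
            for c in classes}

def ood_score(counts, centroids):
    """Distance from a test sample's spike-count vector to the nearest
    class centroid; a larger score means a more atypical activation."""
    return min(np.linalg.norm(counts - mu) for mu in centroids.values())

# Toy usage with synthetic "spike counts" (2 classes, 8 hidden neurons)
rng = np.random.default_rng(0)
train = np.vstack([rng.poisson(5, (50, 8)), rng.poisson(20, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
centroids = fit_detector(train, labels)

in_dist = rng.poisson(5, 8)    # activity resembling class 0
far_ood = rng.poisson(80, 8)   # activity far from both classes
```

A sample whose spike-count vector is far from every class centroid receives a high score and can be thresholded as OoD; the threshold would in practice be calibrated on held-out in-distribution data.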
Funding Information:
A. Martinez Seras receives funding support from the Basque Government through its BIKAINTEK PhD support program. J. Del Ser acknowledges funding support from the same institution through the Consolidated Research Group MATHMODE (IT1456-22) and the ELKARTEK program (EGIA, grant no. KK-2022/00119, and BEREZ-IA, ref. KK-2023/00012). P. García Bringas also thanks the funding support from the Basque Government through the Consolidated Research Group D4K - Deusto for Knowledge (IT1528-22) and the ELKARTEK funding grants REMEDY (ref. KK-2021/00091) and SIIRSE (ref. KK-2022/00029).
© 2023 Elsevier B.V.
- Spiking Neural Networks
- Out-of-Distribution detection
- Explainable artificial intelligence
- Model fusion
- Relevance attribution