A novel Out-of-Distribution detection approach for Spiking Neural Networks: Design, fusion, performance evaluation and explainability

Cited by: 3
Authors
Martinez-Seras, Aitor [1 ,2 ,7 ]
Del Ser, Javier [1 ,3 ]
Lobo, Jesus L. [1 ]
Garcia-Bringas, Pablo [2 ]
Kasabov, Nikola [4 ,5 ,6 ]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Derio 48160, Spain
[2] Univ Deusto, Bilbao 48007, Spain
[3] Univ Basque Country UPV EHU, Bilbao 48013, Spain
[4] Auckland Univ Technol, Auckland, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Coleraine, North Ireland
[6] Bulgarian Acad Sci, IICT, Sofia, Bulgaria
[7] Parque Tecnol Bizkaia, Bizkaia 48160, Spain
Keywords
Spiking Neural Networks; Out-of-Distribution detection; Explainable artificial intelligence; Model fusion; Relevance attribution;
DOI
10.1016/j.inffus.2023.101943
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Research around Spiking Neural Networks has surged in recent years due to their advantages over traditional neural networks, including efficient processing and an inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation counterparts when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed over several classic and event-based image classification datasets to compare the performance of the proposed detector to that of other OoD detection schemes from the literature. Our experiments also assess whether the fusion of our proposed approach with other baseline OoD detection schemes can complement and boost the overall OoD detection capability. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes and, when fused together, can significantly improve the detection scores of the constituent individual detectors.
Furthermore, the explainability technique associated with our proposal is shown to produce relevance attribution maps that conform to expectations for synthetically created OoD instances.
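The abstract above describes flagging test inputs whose hidden-layer spike count patterns are atypical relative to those seen during training. A minimal sketch of that idea follows; note this is an illustrative assumption, not the authors' exact detector, and the function names (`fit_spike_count_centroids`, `ood_score`) are hypothetical:

```python
import math

# Illustrative sketch only: the paper characterizes hidden-layer activations
# as spike count patterns and detects atypical ones; the nearest-class-centroid
# distance scoring used here is an assumed simplification of that idea.

def fit_spike_count_centroids(spike_counts, labels):
    """Average the per-neuron spike counts of each training class."""
    sums, sizes = {}, {}
    for pattern, c in zip(spike_counts, labels):
        acc = sums.setdefault(c, [0.0] * len(pattern))
        for i, s in enumerate(pattern):
            acc[i] += s
        sizes[c] = sizes.get(c, 0) + 1
    return {c: [v / sizes[c] for v in acc] for c, acc in sums.items()}

def ood_score(test_counts, centroids):
    """Distance to the nearest class centroid; higher means more atypical."""
    return min(math.dist(test_counts, mu) for mu in centroids.values())

# Toy usage: spike counts of three hidden neurons for two training classes.
train = [[5, 1, 3], [6, 0, 4], [0, 7, 1], [1, 8, 0]]
labels = [0, 0, 1, 1]
centroids = fit_spike_count_centroids(train, labels)
print(ood_score([5, 1, 3], centroids) < ood_score([50, 40, 60], centroids))
# prints True
```

In practice a threshold on such a score would be calibrated on held-out in-distribution data; the paper's attribution maps then explain which input regions drive a high score.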
Pages: 20
Related Papers
50 records
  • [1] Performance analysis of out-of-distribution detection on trained neural networks
    Henriksson, Jens
    Berger, Christian
    Borg, Markus
    Tornberg, Lars
    Sathyamoorthy, Sankar Raman
    Englund, Cristofer
    [J]. INFORMATION AND SOFTWARE TECHNOLOGY, 2021, 130
  • [2] Performance Analysis of Out-of-Distribution Detection on Various Trained Neural Networks
    Henriksson, Jens
    Berger, Christian
    Borg, Markus
    Tornberg, Lars
    Sathyamoorthy, Sankar Raman
    Englund, Cristofer
    [J]. 2019 45TH EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS (SEAA 2019), 2019, : 113 - 120
  • [3] SmOOD: Smoothness-based Out-of-Distribution Detection Approach for Surrogate Neural Networks in Aircraft Design
    Ben Braiek, Houssem
    Tfaily, Ali
    Khomh, Foutse
    Reid, Thomas
    Guida, Ciro
    [J]. PROCEEDINGS OF THE 37TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE 2022, 2022,
  • [4] Runtime Monitoring for Out-of-Distribution Detection in Object Detection Neural Networks
    Hashemi, Vahid
    Kretinsky, Jan
    Rieder, Sabine
    Schmidt, Jessica
    [J]. FORMAL METHODS, FM 2023, 2023, 14000 : 622 - 634
  • [5] Layer Adaptive Deep Neural Networks for Out-of-Distribution Detection
    Wang, Haoliang
    Zhao, Chen
    Zhao, Xujiang
    Chen, Feng
    [J]. ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2022, PT II, 2022, 13281 : 526 - 538
  • [6] NeuralFP: Out-of-distribution Detection using Fingerprints of Neural Networks
    Lee, Wei-Han
    Millman, Steve
    Desai, Nirmit
    Srivatsa, Mudhakar
    Liu, Changchang
    [J]. 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 9561 - 9568
  • [7] Fixing Robust Out-of-distribution Detection for Deep Neural Networks
    Zhou, Zhiyang
    Liu, Jie
    Dou, Wensheng
    Li, Shuo
    Kang, Liangyi
    Qu, Muzi
    Ye, Dan
    [J]. 2023 IEEE 34TH INTERNATIONAL SYMPOSIUM ON SOFTWARE RELIABILITY ENGINEERING, ISSRE, 2023, : 533 - 544
  • [8] Hyperdimensional Feature Fusion for Out-of-Distribution Detection
    Wilson, Samuel
    Fischer, Tobias
    Sunderhauf, Niko
    Dayoub, Feras
    [J]. 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2023, : 2643 - 2653
  • [9] Interaction of Generalization and Out-of-Distribution Detection Capabilities in Deep Neural Networks
    Aboitiz, Francisco Javier Klaiber
    Legenstein, Robert
    Oezdenizci, Ozan
    [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PART X, 2023, 14263 : 248 - 259
  • [10] Evaluation of Out-of-Distribution Detection Performance on Autonomous Driving Datasets
    Henriksson, Jens
    Berger, Christian
    Ursing, Stig
    Borg, Markus
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING, AITEST, 2023, : 74 - 81