A novel Out-of-Distribution detection approach for Spiking Neural Networks: Design, fusion, performance evaluation and explainability

Cited by: 3
Authors
Martinez-Seras, Aitor [1 ,2 ,7 ]
Del Ser, Javier [1 ,3 ]
Lobo, Jesus L. [1 ]
Garcia-Bringas, Pablo [2 ]
Kasabov, Nikola [4 ,5 ,6 ]
Affiliations
[1] TECNALIA, Basque Res & Technol Alliance BRTA, Derio 48160, Spain
[2] Univ Deusto, Bilbao 48007, Spain
[3] Univ Basque Country UPV EHU, Bilbao 48013, Spain
[4] Auckland Univ Technol, Auckland, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Coleraine, Northern Ireland
[6] Bulgarian Acad Sci, IICT, Sofia, Bulgaria
[7] Parque Tecnol Bizkaia, Bizkaia 48160, Spain
Keywords
Spiking Neural Networks; Out-of-Distribution detection; Explainable artificial intelligence; Model fusion; Relevance attribution
DOI
10.1016/j.inffus.2023.101943
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Research on Spiking Neural Networks has surged in recent years due to their advantages over traditional neural networks, including their efficient processing and inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation approaches when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed on several classic and event-based image classification datasets to compare the performance of the proposed detector to that of other OoD detection schemes from the literature. Our experiments also assess whether the fusion of our proposed approach with other baseline OoD detection schemes can complement and boost the overall OoD detection capability. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes and, when fused together, can significantly improve the detection scores of their constituent individual detectors. Furthermore, the explainability technique associated with our proposal is shown to produce relevance attribution maps that conform to expectations for synthetically created OoD instances.
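The core idea described in the abstract — summarizing hidden-layer activity as spike count patterns and flagging test inputs whose patterns are atypical — can be illustrated with a minimal sketch. This is not the authors' exact detector; the distance-to-template scoring, the function names, and the thresholding rule below are illustrative assumptions.

```python
import numpy as np

def spike_count_pattern(layer_spikes):
    """Collapse a binary spike train (timesteps x neurons) of one hidden
    layer into a per-neuron spike-count vector."""
    return layer_spikes.sum(axis=0)

def ood_score(test_counts, class_templates):
    """Score a test instance's spike-count vector by its distance to the
    nearest class template (e.g., the mean spike-count pattern observed
    for each training class). Larger scores mean more atypical activity."""
    return min(np.linalg.norm(test_counts - t) for t in class_templates.values())

def is_ood(test_counts, class_templates, threshold):
    """Flag the instance as Out-of-Distribution if its score exceeds a
    threshold calibrated on held-out in-distribution data."""
    return ood_score(test_counts, class_templates) > threshold
```

In this sketch the templates act as a characterization of "typical" in-distribution activations; a calibrated threshold (for instance, a high percentile of scores on validation data) then separates in-distribution from OoD inputs.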
Pages: 20