Take CARE: Improving Inherent Robustness of Spiking Neural Networks with Channel-wise Activation Recalibration Module

Times Cited: 0
Authors
Zhang, Yan [1 ]
Chen, Cheng [1 ]
Shen, Dian [1 ]
Wang, Meng [1 ]
Wang, Beilun [1 ]
Affiliations
[1] Southeast Univ, Sch Comp Sci & Engn, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
spiking neural network; object classification; adversarial defense; MODEL;
DOI
10.1109/ICDM58522.2023.00092
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking Neural Networks (SNNs) are considered the next generation of deep neural networks for their computational efficiency and biological plausibility. Still, SNN models can be fooled by adversarial perturbations and noise. There is an urgent need for a robust SNN model that can be deployed in safety-critical domains. Recent works have proposed defense methods inspired by those designed for traditional deep neural networks. However, these methods neglect the inherent robustness of SNN models, which previous studies have demonstrated. In this paper, we dedicate ourselves to improving the inherent robustness of SNNs without additional training. To that end, we unveil that the success of most attacks relies on obfuscating the model's activation. Inspired by this phenomenon, we propose a Channel-wise Activation Recalibration (CARE) module that improves the inherent robustness of SNNs; the resulting framework is named CARENet. By analyzing the model's activation pattern, we prove that the CARE module has a strong capability for activation preservation. We evaluate our method on three benchmarks. Under diverse attacks, including hybrid attacks that combine multiple attack types, our method shows significant accuracy gains over baselines. Furthermore, our framework achieves competitive performance on natural benchmarks.
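This record does not describe the internals of the CARE module, only that it recalibrates activations channel-wise to preserve the clean activation pattern under attack. As a loose, hypothetical illustration of what channel-wise activation recalibration can mean, the sketch below rescales each channel of an accumulated spike-count tensor so its mean firing rate matches the global mean, damping channels an attack has inflated and boosting suppressed ones. The function name `channel_recalibrate` and this particular rescaling rule are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def channel_recalibrate(spikes, eps=1e-8):
    """Illustrative channel-wise activation recalibration (not the
    paper's exact mechanism).

    spikes : float array of shape (C, H, W), per-channel spike counts
             accumulated over the simulation window.

    Each channel is rescaled so its mean activation matches the global
    mean across all channels, pulling attack-distorted channels back
    toward the overall activation statistics.
    """
    ch_mean = spikes.mean(axis=(1, 2), keepdims=True)  # shape (C, 1, 1)
    global_mean = spikes.mean()
    scale = global_mean / (ch_mean + eps)              # per-channel gain
    return spikes * scale
```

In a full SNN this kind of recalibration would sit after a spiking layer at inference time, requiring no retraining, which is consistent with the abstract's goal of improving robustness "without additional training".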
Pages: 828-837
Page Count: 10