Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient

Cited by: 29
Authors
Liang, Ling [1 ]
Hu, Xing [2 ]
Deng, Lei [3 ]
Wu, Yujie [3 ]
Li, Guoqi [3 ]
Ding, Yufei [4 ]
Li, Peng [1 ]
Xie, Yuan [1 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[2] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
[3] Tsinghua Univ, Ctr Brain Inspired Comp Res, Dept Precis Instrument, Beijing 100084, Peoples R China
[4] Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
Keywords
Spatiotemporal phenomena; Computational modeling; Perturbation methods; Biological neural networks; Backpropagation; Unsupervised learning; Training; Adversarial attack; backpropagation through time (BPTT); neuromorphic computing; spike-compatible gradient; spiking neural networks (SNNs);
DOI
10.1109/TNNLS.2021.3106961
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function. In this context, SNN security becomes important, yet it lacks in-depth investigation. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from artificial neural network (ANN) attacks: 1) current adversarial attacks are mainly based on gradient information, which in SNNs presents a spatiotemporal pattern that is hard to obtain with conventional backpropagation algorithms; 2) the continuous gradient of the input is incompatible with the binary spiking input during gradient accumulation, hindering the generation of spike-based adversarial examples; and 3) the input gradient can sometimes be all zeros (i.e., vanishing) due to the zero-dominant derivative of the firing function. Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which makes it possible to attack the models accurately given spatiotemporal gradient maps. We propose two approaches to address the above challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike (G2S) converter that turns continuous gradients into ternary ones compatible with spike inputs. We then design a restricted spike flipper (RSF) that, when all-zero gradients are encountered, constructs ternary gradients that randomly flip the spike inputs with a controllable turnover rate. Putting these methods together, we build an adversarial attack methodology for SNNs. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer on the attack effectiveness. Extensive experiments are conducted to validate our solution. Besides the quantitative analysis of the influence factors, we also compare SNNs and ANNs against adversarial attacks under different attack methods. This work can help reveal what happens in SNN attacks and might stimulate more research on the security of SNN models and neuromorphic devices.
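The abstract describes the G2S converter and RSF only at a high level. Below is a minimal NumPy sketch of how the two ternary-gradient steps could look; the function names, the sign-based ternarization in gradient_to_spike, and the random masking rule in restricted_spike_flipper are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of spike-compatible gradient conversion, assuming a
# sign-based G2S rule and a Bernoulli-masked RSF; not the paper's code.
import numpy as np

def gradient_to_spike(grad, spikes):
    """Gradient-to-spike (G2S) converter (assumed form).

    Maps a continuous input gradient to a ternary perturbation in
    {-1, 0, +1} so that adding it to a binary spike train keeps the
    input binary: +1 may only fire a silent position (0 -> 1) and
    -1 may only silence an active one (1 -> 0).
    """
    ternary = np.sign(grad)                       # continuous -> {-1, 0, +1}
    can_turn_on = (ternary > 0) & (spikes == 0)   # keep only domain-preserving
    can_turn_off = (ternary < 0) & (spikes == 1)  # flips
    return np.where(can_turn_on, 1, 0) + np.where(can_turn_off, -1, 0)

def restricted_spike_flipper(spikes, turnover_rate, rng):
    """Restricted spike flipper (RSF) (assumed form).

    When the input gradient vanishes (all zeros), build a ternary
    perturbation that flips a random subset of spike positions, with
    the expected fraction of flips capped by `turnover_rate`.
    """
    mask = rng.random(spikes.shape) < turnover_rate
    # 1 - 2*s flips 0 -> +1 and 1 -> -1 at the selected positions.
    return np.where(mask, 1 - 2 * spikes, 0)

# One attack step: use G2S when the gradient carries signal, RSF otherwise.
rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=(8, 10))   # toy binary spike train (T x N)
grad = rng.normal(size=spikes.shape)        # spatiotemporal input gradient
delta = (gradient_to_spike(grad, spikes) if grad.any()
         else restricted_spike_flipper(spikes, 0.05, rng))
adversarial_spikes = spikes + delta         # still binary in {0, 1}
assert set(np.unique(adversarial_spikes)) <= {0, 1}
```

Because every perturbation stays in {-1, 0, +1} and is applied only where it preserves the binary domain, the adversarial example remains a valid spike train, which is the gradient-input compatibility property the abstract emphasizes.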
Pages: 2569-2583
Number of Pages: 15
Related Papers
50 entries in total
  • [1] SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic
    Lin, Xuanwei
    Dong, Chen
    Liu, Ximeng
    Zhang, Yuanyuan
    2022 22ND IEEE/ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING (CCGRID 2022), 2022, : 366 - 375
  • [2] Rate Gradient Approximation Attack Threats Deep Spiking Neural Networks
    Bu, Tong
    Ding, Jianhao
    Hao, Zecheng
    Yu, Zhaofei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 7896 - 7906
  • [3] Differentiable Spike: Rethinking Gradient-Descent for Training Spiking Neural Networks
    Li, Yuhang
    Guo, Yufei
    Zhang, Shanghang
    Deng, Shikuang
    Hai, Yongqing
    Gu, Shi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [4] Spike Attention Coding for Spiking Neural Networks
    Liu, Jiawen
    Hu, Yifan
    Li, Guoqi
    Pei, Jing
    Deng, Lei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12) : 18892 - 18898
  • [5] Adversarial event patch for Spiking Neural Networks
    Yan, Song
    Fei, Jinlong
    Wei, Hui
    Zhao, Bingbing
    Wang, Zheng
    Yang, Guoliang
    KNOWLEDGE-BASED SYSTEMS, 2025, 309
  • [6] Adversarial Training for Probabilistic Spiking Neural Networks
    Bagheri, Alireza
    Simeone, Osvaldo
    Rajendran, Bipin
    2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), 2018, : 261 - 265
  • [7] Gradient Descent for Spiking Neural Networks
    Huh, Dongsung
    Sejnowski, Terrence J.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [8] Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data
    Yao, Yanmeng
    Zhao, Xiaohan
    Gu, Bin
    COMPUTER VISION - ECCV 2024, PT LXXII, 2025, 15130 : 412 - 428
  • [9] Robustness of Spiking Neural Networks Based on Time-to-First-Spike Encoding Against Adversarial Attacks
    Nomura, Osamu
    Sakemi, Yusuke
    Hosomi, Takeo
    Morie, Takashi
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2022, 69 (09) : 3640 - 3644
  • [10] Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks
    Bittar, Alexandre
    Garner, Philip N.
    FRONTIERS IN NEUROSCIENCE, 2024, 18