Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient

Cited by: 29
Authors
Liang, Ling [1 ]
Hu, Xing [2 ]
Deng, Lei [3 ]
Wu, Yujie [3 ]
Li, Guoqi [3 ]
Ding, Yufei [4 ]
Li, Peng [1 ]
Xie, Yuan [1 ]
Affiliations
[1] Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
[2] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
[3] Tsinghua Univ, Ctr Brain Inspired Comp Res, Dept Precis Instrument, Beijing 100084, Peoples R China
[4] Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
Keywords
Spatiotemporal phenomena; Computational modeling; Perturbation methods; Biological neural networks; Backpropagation; Unsupervised learning; Training; Adversarial attack; backpropagation through time (BPTT); neuromorphic computing; spike-compatible gradient; spiking neural networks (SNNs);
DOI
10.1109/TNNLS.2021.3106961
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function, which makes SNN security important, yet it lacks in-depth investigation. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from attacks on artificial neural networks (ANNs): 1) current adversarial attacks rely mainly on gradient information, which in SNNs presents in a spatiotemporal pattern that is hard to obtain with conventional backpropagation algorithms; 2) the continuous gradient of the input is incompatible with the binary spiking input during gradient accumulation, hindering the generation of spike-based adversarial examples; and 3) the input gradient can sometimes be all-zeros (i.e., vanishing) because the derivative of the firing function is zero-dominant. Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which makes it possible to attack the models accurately given spatiotemporal gradient maps. We propose two approaches to address the above challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike (G2S) converter that converts continuous gradients into ternary ones compatible with spike inputs. We then design a restricted spike flipper (RSF) that, when all-zero gradients are met, constructs ternary gradients that randomly flip the spike inputs with a controllable turnover rate. Putting these methods together, we build an adversarial attack methodology for SNNs. Moreover, we analyze how the training loss function and the firing threshold of the penultimate layer influence attack effectiveness. Extensive experiments are conducted to validate our solution. Besides the quantitative analysis of the influencing factors, we also compare SNNs and ANNs under different adversarial attack methods. This work helps reveal what happens in SNN attacks and might stimulate more research on the security of SNN models and neuromorphic devices.
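The abstract describes G2S and RSF only at a high level, so the NumPy sketch below illustrates one plausible reading of the two components: G2S maps a continuous input gradient to a ternary perturbation that keeps the spike input binary, and RSF flips a controllable fraction of spikes when the gradient vanishes. The probabilistic sampling, the normalization, and the names g2s, rsf, and turnover are our assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def g2s(grad, spikes):
    """Gradient-to-spike (G2S) sketch: turn a continuous input gradient
    into a ternary perturbation in {-1, 0, +1} such that
    spikes + perturbation stays in {0, 1}."""
    p = np.abs(grad) / (np.abs(grad).max() + 1e-12)  # normalize |grad| to [0, 1]
    sampled = rng.random(grad.shape) < p             # sample: larger gradients flip more often
    tern = np.sign(grad).astype(np.int8) * sampled   # ternary gradient
    # overflow-aware clamp: drop updates that would push a spike outside {0, 1}
    tern[(spikes == 1) & (tern > 0)] = 0
    tern[(spikes == 0) & (tern < 0)] = 0
    return tern

def rsf(spikes, turnover=0.05):
    """Restricted spike flipper (RSF) sketch: on an all-zero gradient,
    flip roughly a `turnover` fraction of spikes (+1 where x=0, -1 where x=1)."""
    flip = rng.random(spikes.shape) < turnover
    return ((1 - 2 * spikes) * flip).astype(np.int8)

# One attack step on a spatiotemporal spike map x in {0,1}^(T x N):
x = rng.integers(0, 2, size=(10, 784), dtype=np.int8)  # toy spike input
grad = rng.normal(size=x.shape)                         # stand-in for a BPTT input gradient
delta = rsf(x) if not np.any(grad) else g2s(grad, x)    # fall back to RSF when grad vanishes
x_adv = x + delta                                       # still a valid binary spike train
assert set(np.unique(x_adv)) <= {0, 1}
```

The clamp in g2s is what keeps the adversarial example a legal spike train: a positive update is only allowed where the input spike is 0, and a negative update only where it is 1, so no continuous perturbation ever leaks into the binary input.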
Pages: 2569-2583
Page count: 15
Related Papers
50 items in total
  • [41] Early Termination of STDP Learning with Spike Counts in Spiking Neural Networks
    Choi, Sunghyun
    Park, Jongsun
    2020 17TH INTERNATIONAL SOC DESIGN CONFERENCE (ISOCC 2020), 2020, : 75 - 76
  • [42] Probabilistic Spike Propagation for Efficient Hardware Implementation of Spiking Neural Networks
    Nallathambi, Abinand
    Sen, Sanchari
    Raghunathan, Anand
    Chandrachoodan, Nitin
    FRONTIERS IN NEUROSCIENCE, 2021, 15
  • [43] Guaranteeing Spike Arrival Time in Multiboard & Multichip Spiking Neural Networks
    Belhadj, Bilel
    Tomas, Jean
    Malot, Olivia
    Bornat, Yannick
    N'Kaoua, Gilles
    Renaud, Sylvie
    2010 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, 2010, : 377 - 380
  • [44] Fractional-order spike-timing-dependent gradient descent for multi-layer spiking neural networks
    Yang, Yi
    Voyles, Richard M.
    Zhang, Haiyan H.
    Nawrocki, Robert A.
    NEUROCOMPUTING, 2025, 611
  • [45] Weight Quantization Method for Spiking Neural Networks and Analysis of Adversarial Robustness
    Li Y.
    Li Y.
    Cui X.
    Ni Q.
    Zhou Y.
Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2023, 45 (09): 3218 - 3227
  • [46] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505
  • [47] Adversarial Attack Against Convolutional Neural Network via Gradient Approximation
    Wang, Zehao
    Li, Xiaoran
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024, 2024, 14867 : 221 - 232
  • [48] Natural gradient enables fast sampling in spiking neural networks
    Masset, Paul
    Zavatone-Veth, Jacob A.
    Connor, J. Patrick
    Murthy, Venkatesh N.
    Pehlevan, Cengiz
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [49] Smooth Exact Gradient Descent Learning in Spiking Neural Networks
    Klos, Christian
    Memmesheimer, Raoul-Martin
    PHYSICAL REVIEW LETTERS, 2025, 134 (02)
  • [50] GRADUAL SURROGATE GRADIENT LEARNING IN DEEP SPIKING NEURAL NETWORKS
    Chen, Yi
    Zhang, Silin
    Ren, Shiyu
    Qu, Hong
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8927 - 8931