Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters

Cited by: 12
Authors
El-Allami, Rida [1 ]
Marchisio, Alberto [2 ]
Shafique, Muhammad [3 ]
Alouani, Ihsen [1 ]
Institutions
[1] Univ Polytech Hauts De France, IEMN, CNRS, UMR8520, Valenciennes, France
[2] Tech Univ Wien, Inst Comp Engn, Vienna, Austria
[3] New York Univ Abu Dhabi, Div Engn, Abu Dhabi, U Arab Emirates
Keywords
SNN; Spiking Neural Networks; Security; Machine Learning; Deep Learning; Neuromorphic; Adversarial Attacks; Robustness; Parameters; Optimization; Analysis;
DOI
10.23919/DATE51398.2021.9473981
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat: their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works claimed the inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neuron's firing voltage threshold and time window boundaries. We thoroughly study SNN security under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters. Our results show a significant impact of the structural parameters on the SNNs' security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work that investigates the impact of structural parameters on SNN robustness to adversarial attacks. The proposed contributions and the experimental framework are available online(1) to the community for reproducible research.
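The two structural parameters the abstract highlights — the firing voltage threshold and the simulation time window — can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron sketch. This is a hypothetical illustration only: the function name, leak factor, and default values below are assumptions, not the authors' actual model or implementation.

```python
# Minimal LIF neuron sketch (hypothetical; not the paper's implementation).
# Varies the two structural parameters the abstract mentions:
#   v_th     - firing voltage threshold
#   t_window - number of simulated timesteps (time window boundary)

def lif_spike_train(input_current, v_th=1.0, t_window=10, leak=0.9):
    """Simulate one LIF neuron over at most t_window timesteps.

    input_current: per-timestep input values (list of floats)
    v_th:          firing voltage threshold (structural parameter 1)
    t_window:      length of the simulation time window (structural parameter 2)
    leak:          membrane leak factor applied each timestep (assumed value)
    """
    v = 0.0
    spikes = []
    for t in range(min(t_window, len(input_current))):
        v = leak * v + input_current[t]   # leaky integration of input
        if v >= v_th:                      # threshold crossing -> emit spike
            spikes.append(1)
            v = 0.0                        # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A higher threshold filters out small perturbations of the input,
# which is one intuition for why these parameters affect robustness:
weak = [0.3] * 10
print(lif_spike_train(weak, v_th=1.0))  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(lif_spike_train(weak, v_th=2.0))  # [0]*10 - never crosses the higher threshold
```

With the same weak input, raising `v_th` from 1.0 to 2.0 suppresses all spikes within the 10-step window, illustrating how threshold and window length jointly gate how much input perturbation propagates downstream.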
Pages: 774-779 (6 pages)