Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks

Cited by: 1
Authors
Smagulova, Kamilya [1 ]
Bacha, Lina [2 ]
Fouda, Mohammed E. [3 ]
Kanj, Rouwaida [2 ]
Eltawil, Ahmed [1 ]
Affiliations
[1] King Abdullah Univ Sci & Technol, Div Comp Elect & Math Sci & Engn CEMSE, Thuwal 23955, Saudi Arabia
[2] Amer Univ Beirut, Dept Elect & Comp Engn, Beirut 11072020, Lebanon
[3] Rain Neuromorph Inc, San Francisco, CA 94110 USA
Keywords
adversarial attacks; robustness; transferability; CCT; VGG; SpinalNet; ART toolbox;
DOI
10.3390/electronics13030592
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recent works have demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead the output of neural networks. Moreover, the same adversarial sample can be transferable and used to fool different neural models. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper to evaluate the robustness of emerging CNN- and transformer-inspired image classifier models, such as SpinalNet and the Compact Convolutional Transformer (CCT), against popular white- and black-box adversarial attacks imported from the Adversarial Robustness Toolbox (ART). In addition, the adversarial transferability of the generated samples across the given models was studied. The tests were carried out on the CIFAR-10 dataset, and the results show that SpinalNet is about as susceptible to the same attacks as the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can serve as a reference for further studies, such as the development of new attacks and defense mechanisms.
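As a rough illustration of the evaluation pipeline described in the abstract, the sketch below uses the ART toolbox to craft FGSM adversarial examples against a source classifier and then measures how strongly they transfer to a second model on CIFAR-10. This is a minimal sketch under stated assumptions, not the authors' exact protocol: the placeholder models (a torchvision VGG-16 and ResNet-18 standing in for the trained VGG, SpinalNet, and CCT networks), the 256-image subset, and the epsilon of 8/255 are illustrative choices.

# Sketch: FGSM attack with ART on a source model, transferability check on a target model.
# Placeholder models and hyperparameters are assumptions, not the paper's exact setup.
import numpy as np
import torch.nn as nn
from torchvision import datasets, transforms, models

from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# A small CIFAR-10 test subset in [0, 1], as NumPy arrays (ART operates on NumPy inputs).
test_set = datasets.CIFAR10(root="./data", train=False, download=True,
                            transform=transforms.ToTensor())
subset = [test_set[i] for i in range(256)]
x_test = np.stack([img.numpy() for img, _ in subset])
y_test = np.array([label for _, label in subset])

def wrap(model: nn.Module) -> PyTorchClassifier:
    """Wrap a PyTorch model in an ART classifier for CIFAR-10 inputs."""
    return PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(3, 32, 32),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

# Placeholder source/target networks; in practice these would be the trained
# VGG, SpinalNet, or CCT models studied in the paper.
source = wrap(models.vgg16(num_classes=10))
target = wrap(models.resnet18(num_classes=10))

# White-box FGSM attack crafted against the source model.
attack = FastGradientMethod(estimator=source, eps=8 / 255)
x_adv = attack.generate(x=x_test)

def accuracy(clf: PyTorchClassifier, x: np.ndarray, y: np.ndarray) -> float:
    return float((clf.predict(x).argmax(axis=1) == y).mean())

print("source, clean:      ", accuracy(source, x_test, y_test))
print("source, adversarial:", accuracy(source, x_adv, y_test))
# Transferability: accuracy drop of the target model on samples crafted for the source model.
print("target, adversarial:", accuracy(target, x_adv, y_test))

The same loop can be repeated with other ART evasion attacks (e.g., PGD or black-box attacks) and with each model in turn as the source, which is how a transferability matrix across the studied architectures would be assembled.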
Pages: 16