Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Cited by: 2
Authors
Villegas-Ch, William [1]
Jaramillo-Alcazar, Angel [1]
Lujan-Mora, Sergio [2]
Affiliations
[1] Univ Las Amer, Escuela Ingn Cibersegur, Fac Ingn Ciencias Aplicadas, Quito 170125, Ecuador
[2] Univ Alicante, Dept Lenguajes & Sistemas Informat, Alicante 03690, Spain
Keywords
adversarial examples; robustness of models; countermeasures; neural networks
DOI
10.3390/bdcc8010008
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. Attacks were performed with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini and Wagner (CW) attack to perturb the original images and analyze their impact on the model's classification accuracy. Image manipulation techniques were also investigated as defensive measures against adversarial attacks. The results highlighted the model's vulnerability to adversarial examples: FGSM effectively altered the original classifications, while the CW method proved less effective. Noise reduction, image compression, and Gaussian blurring emerged as promising countermeasures. These findings underscore the importance of addressing the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. The article emphasizes the urgency of the threat that adversarial examples pose to machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. Such efforts are crucial to safeguarding model integrity and trust in an environment of constantly evolving hostile threats. On average, a 25% decrease in accuracy was observed for the VGG16 model when exposed to the FGSM and PGD attacks, and an even more significant 35% decrease with the CW method.
Pages: 23
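
The three attacks named in the title share a common structure: each perturbs an input image in the direction that increases the classifier's loss. As a concrete illustration, below is a minimal PyTorch sketch of FGSM and PGD against a pretrained VGG16. The epsilon, step size, iteration count, and input pipeline are illustrative assumptions, not the authors' experimental configuration.

# Hedged sketch: FGSM and PGD against a pretrained VGG16.
# Hyperparameters and the placeholder input are assumptions for illustration.
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: x' = x + epsilon * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def pgd_attack(model, x, y, epsilon, alpha=0.01, steps=10):
    """PGD: iterated sign-of-gradient steps, each projected back into
    the L-infinity ball of radius epsilon around the original input."""
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Projection step: clip the total perturbation to [-epsilon, epsilon].
        x_adv = (x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
x = torch.rand(1, 3, 224, 224)   # placeholder image in [0, 1]
y = torch.tensor([0])            # placeholder label
x_fgsm = fgsm_attack(model, x, y, epsilon=0.03)
x_pgd = pgd_attack(model, x, y, epsilon=0.03)

The CW attack, by contrast, solves an optimization problem over the perturbation rather than taking sign-of-gradient steps, which is why it is typically slower but yields smaller, harder-to-detect perturbations.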
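The countermeasures reported in the abstract (noise reduction, image compression, and Gaussian blurring) are all input transformations applied before classification. Below is a minimal sketch assuming a Pillow-based pipeline; the paper does not specify its implementation, so the library choice and parameter values are assumptions.

# Hedged sketch of input-transformation defenses: Gaussian blur,
# JPEG re-encoding, and median-filter noise reduction (Pillow assumed).
import io
from PIL import Image, ImageFilter

def gaussian_blur(img: Image.Image, radius: float = 1.0) -> Image.Image:
    """Low-pass filter the image; small adversarial perturbations are
    largely high-frequency, so blurring removes part of them."""
    return img.filter(ImageFilter.GaussianBlur(radius=radius))

def jpeg_compress(img: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip through lossy JPEG encoding to discard fine perturbation
    detail while keeping the semantic content of the image."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def median_denoise(img: Image.Image, size: int = 3) -> Image.Image:
    """Median filtering as a simple noise-reduction step."""
    return img.filter(ImageFilter.MedianFilter(size=size))

The common design choice behind all three transforms is the same: gradient-based perturbations concentrate in high spatial frequencies, so low-pass operations can strip much of the attack signal while preserving the features the classifier needs, at the cost of some clean-image accuracy.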