Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Cited by: 2
Authors
Villegas-Ch, William [1 ]
Jaramillo-Alcazar, Angel [1 ]
Lujan-Mora, Sergio [2 ]
Affiliations
[1] Univ Las Amer, Escuela Ingn Cibersegur, Fac Ingn Ciencias Aplicadas, Quito 170125, Ecuador
[2] Univ Alicante, Dept Lenguajes & Sistemas Informat, Alicante 03690, Spain
Keywords
adversarial examples; robustness of models; countermeasures; NEURAL-NETWORKS
DOI
10.3390/bdcc8010008
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign Method, the Projected Gradient Descent method, and the Carlini and Wagner attack to perturb the original images and analyze their impact on the model's classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model's vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective at doing so. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the importance of addressing the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. This article emphasizes the urgency of addressing the threat posed by adversarial examples in machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment marked by constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the Fast Gradient Sign Method and Projected Gradient Descent attacks, and an even more significant 35% decrease with the Carlini and Wagner method.
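The record does not include the authors' implementation, so the following is a minimal PyTorch sketch of the two gradient-based attacks named in the abstract, FGSM and PGD. The pretrained torchvision VGG16, the epsilon and alpha values, and the assumption that inputs are scaled to [0, 1] are illustrative choices, not the paper's exact configuration (in particular, ImageNet normalization is omitted for brevity).

```python
import torch
import torch.nn.functional as F
from torchvision.models import VGG16_Weights, vgg16

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step attack: shift every pixel by epsilon in the direction
    # that increases the cross-entropy loss for the true label y.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    # Iterated FGSM: after each small step, project the image back
    # into an L-infinity ball of radius epsilon around the original.
    x_clean = x.clone().detach()
    x_adv = x_clean.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x_clean + (x_adv - x_clean).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv.detach()

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()
# x: an (N, 3, 224, 224) batch scaled to [0, 1]; y: ground-truth labels.
# x_adv = fgsm_attack(model, x, y)  # or pgd_attack(model, x, y)
```

FGSM takes a single step of size epsilon along the sign of the loss gradient, while PGD repeats smaller steps and projects the result back into the epsilon-ball around the clean image, which is why it typically degrades accuracy at least as much as FGSM.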
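The abstract names noise reduction, image compression, and Gaussian blurring as countermeasures. A minimal Pillow sketch of such an input-transformation pipeline follows; the filter radius, the JPEG quality, and the use of a median filter for the noise-reduction step are assumptions, since the record does not specify the authors' parameters.

```python
from io import BytesIO
from PIL import Image, ImageFilter

def gaussian_blur(img, radius=1.0):
    # Low-pass filtering smooths the high-frequency noise that
    # gradient-based attacks tend to add.
    return img.filter(ImageFilter.GaussianBlur(radius=radius))

def jpeg_compress(img, quality=75):
    # Lossy re-encoding quantizes away small per-pixel perturbations.
    buffer = BytesIO()
    img.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")

def median_denoise(img, size=3):
    # Median filtering as a simple stand-in for the noise-reduction step.
    return img.filter(ImageFilter.MedianFilter(size=size))

# Chain the defenses before handing the image to the classifier:
# cleaned = gaussian_blur(median_denoise(jpeg_compress(img)))
```

Each transform is applied to the (possibly adversarial) image before classification; there is an inherent trade-off, as aggressive blurring or compression also erodes accuracy on clean images.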
Pages: 23