Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW

Cited by: 2
Authors
Villegas-Ch, William [1 ]
Jaramillo-Alcazar, Angel [1 ]
Lujan-Mora, Sergio [2 ]
Affiliations
[1] Univ Las Amer, Escuela Ingn Cibersegur, Fac Ingn Ciencias Aplicadas, Quito 170125, Ecuador
[2] Univ Alicante, Dept Lenguajes & Sistemas Informat, Alicante 03690, Spain
Keywords
adversarial examples; model robustness; countermeasures; neural networks
DOI
10.3390/bdcc8010008
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This study evaluated the generation of adversarial examples and the resulting robustness of an image classification model. Attacks were performed with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini and Wagner (CW) attack to perturb the original images and analyze their impact on the model's classification accuracy. Image manipulation techniques were also investigated as defensive measures against adversarial attacks. The results highlighted the model's vulnerability to adversarial examples: FGSM effectively altered the original classifications, while the CW method proved less effective at doing so. Noise reduction, image compression, and Gaussian blurring were presented as promising and effective countermeasures. These findings underscore the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. The article emphasizes the urgency of addressing the threat that adversarial examples pose to machine learning models, and the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment of constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the FGSM and PGD attacks, and an even larger 35% decrease with the CW attack.
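For concreteness, the sketch below illustrates the two gradient-based attacks named in the abstract, FGSM and PGD, against a pretrained VGG16, followed by Gaussian blurring as one of the input-manipulation countermeasures the study examines. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation: the epsilon value, the PGD step size, the blur kernel, and the random stand-in input are illustrative choices, and a recent torchvision (0.13 or later) is assumed for the weights API.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms as T

    # Pretrained VGG16 as the victim model (the architecture evaluated in the paper).
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

    def fgsm_attack(x, y, epsilon=0.03):
        """FGSM: x_adv = x + epsilon * sign(grad_x J(theta, x, y))."""
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    def pgd_attack(x, y, epsilon=0.03, alpha=0.007, steps=10):
        """PGD: iterated FGSM, projected back into the L-inf epsilon-ball."""
        orig = x.clone().detach()
        adv = orig.clone()
        for _ in range(steps):
            adv.requires_grad_(True)
            F.cross_entropy(model(adv), y).backward()
            adv = adv + alpha * adv.grad.sign()
            # Project onto the epsilon-ball around the original image,
            # then back into the valid pixel range.
            adv = torch.max(torch.min(adv, orig + epsilon), orig - epsilon)
            adv = adv.clamp(0, 1).detach()
        return adv

    # Gaussian blurring as a simple input-transformation countermeasure.
    blur = T.GaussianBlur(kernel_size=5, sigma=1.0)

    x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
    y = torch.tensor([0])            # stand-in class label
    x_adv = pgd_attack(x, y)
    print("clean prediction:       ", model(x).argmax(1).item())
    print("adversarial prediction: ", model(x_adv).argmax(1).item())
    print("blurred adv. prediction:", model(blur(x_adv)).argmax(1).item())

The CW attack is omitted from the sketch: rather than taking gradient-sign steps, it solves an optimization problem that minimizes the perturbation size subject to misclassification, which is why its perturbations are typically smaller and harder to perceive.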
Pages: 23
Related Papers (items 31-40 of 50)
  • [31] Adversarial attacks on deep learning models in smart grids
    Hao, Jingbo
    Tao, Yang
    ENERGY REPORTS, 2022, 8 : 123 - 129
  • [32] Defending AI Models Against Adversarial Attacks in Smart Grids Using Deep Learning
    Sampedro, Gabriel Avelino
    Ojo, Stephen
    Krichen, Moez
    Alamro, Meznah A.
    Mihoub, Alaeddine
    Karovic, Vincent
    IEEE ACCESS, 2024, 12 : 157408 - 157417
  • [33] Evaluating the Robustness of Deep-Learning Algorithm-Selection Models by Evolving Adversarial Instances
    Hart, Emma
    Renau, Quentin
    Sim, Kevin
    Alissa, Mohamad
    PARALLEL PROBLEM SOLVING FROM NATURE-PPSN XVIII, PT II, PPSN 2024, 2024, 15149 : 121 - 136
  • [34] Towards Understanding and Enhancing Robustness of Deep Learning Models against Malicious Unlearning Attacks
    Qian, Wei
    Zhao, Chenxu
    Le, Wei
    Ma, Meiyi
    Huai, Mengdi
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 1932 - 1942
  • [35] A Survey of Adversarial Attacks: An Open Issue for Deep Learning Sentiment Analysis Models
    Vazquez-Hernandez, Monserrat
    Morales-Rosales, Luis Alberto
    Algredo-Badillo, Ignacio
    Fernandez-Gregorio, Sofia Isabel
    Rodriguez-Rangel, Hector
    Cordoba-Tlaxcalteco, Maria-Luisa
    APPLIED SCIENCES-BASEL, 2024, 14 (11)
  • [36] Evaluating Realistic Adversarial Attacks against Machine Learning Models for Windows PE Malware Detection
    Imran, Muhammad
    Appice, Annalisa
    Malerba, Donato
    FUTURE INTERNET, 2024, 16 (05)
  • [37] On the robustness of skeleton detection against adversarial attacks
    Bai, Xiuxiu
    Yang, Ming
    Liu, Zhe
    NEURAL NETWORKS, 2020, 132 : 416 - 427
  • [38] ROBUSTNESS OF SAAK TRANSFORM AGAINST ADVERSARIAL ATTACKS
    Ramanathan, Thiyagarajan
    Manimaran, Abinaya
    You, Suya
    Kuo, C-C Jay
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2531 - 2535
  • [39] Adversarial Attacks Against Deep Generative Models on Data: A Survey
    Sun, Hui
    Zhu, Tianqing
    Zhang, Zhiqiu
    Jin, Dawei
    Xiong, Ping
    Zhou, Wanlei
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (04) : 3367 - 3388
  • [40] Robustness Against Adversarial Attacks Using Dimensionality
    Chattopadhyay, Nandish
    Chatterjee, Subhrojyoti
    Chattopadhyay, Anupam
    SECURITY, PRIVACY, AND APPLIED CRYPTOGRAPHY ENGINEERING, SPACE 2021, 2022, 13162 : 226 - 241