Unmasking the Vulnerabilities of Deep Learning Models: A Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Cited: 0
Authors
Juraev, Firuz [1 ]
Abuhamad, Mohammed [2 ]
Chan-Tin, Eric [2 ]
Thiruvathukal, George K. [2 ]
Abuhmed, Tamer [1 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Comp Sci & Engn, Suwon, South Korea
[2] Loyola Univ, Dept Comp Sci, Chicago, IL USA
Funding
National Research Foundation, Singapore;
Keywords
Threat Analysis; Deep Learning; Black-box Attacks; Adversarial Perturbations; Defensive Techniques;
DOI
10.1109/SVCC61185.2024.10637364
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Learning (DL) is rapidly maturing to the point that it can be used in safety- and security-critical applications, such as self-driving vehicles, surveillance, drones, and robots. However, adversarial samples, which are undetectable to the human eye, pose a serious threat that can cause the model to misbehave and compromise the performance of such applications. Addressing the robustness of DL models has become crucial to understanding and defending against adversarial attacks. In this study, we perform comprehensive experiments to examine the effect of adversarial attacks and defenses on various model architectures across well-known datasets. Our research focuses on black-box attacks such as SimBA, HopSkipJump, MGAAttack, and Boundary attacks, as well as preprocessor-based defensive mechanisms, including bit squeezing, median smoothing, and the JPEG filter. Experimenting with various models, our results demonstrate that the level of noise needed for the attack increases as the number of layers increases. Moreover, the attack success rate decreases as the number of layers increases. This indicates that model complexity and robustness have a significant relationship. Investigating the relationship between diversity and robustness, our experiments with diverse models show that having a large number of parameters does not imply higher robustness. Our experiments extend to show the effects of the training dataset on model robustness. Various datasets, such as ImageNet-1000, CIFAR-100, and CIFAR-10, are used to evaluate the black-box attacks. Considering the multiple dimensions of our analysis, e.g., model complexity and training dataset, we examined the behavior of black-box attacks when models apply defenses. Our results show that applying defense strategies can significantly reduce attack effectiveness. This research provides in-depth analysis and insight into the robustness of DL models against various attacks and defenses.
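Two of the preprocessor-based defenses named in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical NumPy implementation of bit squeezing (color-depth reduction) and median smoothing; the function names and parameter defaults are illustrative assumptions, not the authors' code. The JPEG filter works analogously, round-tripping the input through lossy compression to discard high-frequency adversarial noise.

```python
import numpy as np

def bit_squeeze(x, bits=5):
    """Quantize float pixels in [0, 1] to 2**bits gray levels.

    Low-amplitude adversarial perturbations are rounded away
    because neighboring values collapse onto the same level.
    """
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, size=3):
    """Apply a size x size spatial median filter per channel.

    x has shape (height, width, channels). Edges are padded by
    replication so the output shape matches the input.
    """
    pad = size // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    h, w, _ = x.shape
    # Stack all shifted views of the window, then take the median.
    windows = np.stack(
        [xp[i:i + h, j:j + w] for i in range(size) for j in range(size)]
    )
    return np.median(windows, axis=0)

# Example: apply both defenses to a (hypothetical) input image.
rng = np.random.default_rng(0)
x = rng.random((32, 32, 3)).astype(np.float32)
x_squeezed = bit_squeeze(x, bits=5)   # at most 2**5 = 32 distinct values
x_smoothed = median_smooth(x, size=3)
```

Both transforms are attractive as defenses because they are model-agnostic: they are applied to the input before inference, so the classifier itself needs no retraining.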
Pages: 8