Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification

Cited by: 20
Authors
Khamaiseh, Samer Y. [1 ]
Bagagem, Derek [2 ]
Al-Alaj, Abdullah [3 ]
Mancino, Mathew [4 ]
Alomari, Hakam W. [1 ]
Affiliations
[1] Miami Univ, Dept Comp Sci & Software Engn, Oxford, OH 45056 USA
[2] Monmouth Univ, Dept Comp Sci & Software Engn, West Long Branch, Long Branch, NJ 07764 USA
[3] Virginia Wesleyan Univ, Dept Comp Sci, Virginia Beach, VA 23455 USA
[4] CACI Int, Norfolk, VA 23455 USA
Keywords
Deep learning; Neural networks; Training data; Perturbation methods; Security; Computational modeling; Machine learning algorithms; Deep neural networks; artificial intelligence; adversarial examples; adversarial perturbations; COMPUTER VISION; ROBUSTNESS;
DOI
10.1109/ACCESS.2022.3208131
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
The popularity of adopting deep neural networks (DNNs) to solve hard problems has increased substantially. Specifically, in the field of computer vision, DNNs have become a core element in developing many image and video classification and recognition applications. However, DNNs are vulnerable to adversarial attacks, in which, given a well-trained image classification model, a malicious input can be crafted by adding small perturbations that cause the model to misclassify the image. This phenomenon raises many security concerns about utilizing DNNs in safety-critical applications and has attracted the attention of academic and industry researchers. As a result, multiple studies have proposed novel attacks that can compromise the integrity of state-of-the-art image classification neural networks. The rise of these attacks urges the research community to explore countermeasures that mitigate them and increase the reliability of adopting DNNs in major applications. Hence, various defense strategies have been proposed to protect DNNs against adversarial attacks. In this paper, we thoroughly review the most recent and state-of-the-art adversarial attack methods, providing an in-depth analysis and explanation of how these attacks work. In our review, we focus on explaining the mathematical concepts and terminology of adversarial attacks, which provides a comprehensive and solid survey for the research community. Additionally, we provide a comprehensive review of the most recent defense mechanisms and discuss their effectiveness in defending DNNs against adversarial attacks. Finally, we highlight the current challenges and open issues in this field as well as future research directions.
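As a concrete illustration of the kind of attack the abstract describes, the sketch below uses the fast gradient sign method (FGSM), one of the gradient-based attacks commonly covered in such surveys, to perturb an input image so that a trained classifier misclassifies it. This is a minimal PyTorch sketch for illustration only; the model, image, label, and epsilon names are placeholders and the example is not taken from the surveyed paper.

# Minimal FGSM sketch (illustrative, not from the surveyed paper).
# Assumes `model` is a trained image classifier, `image` is a batched
# tensor with pixel values in [0, 1], and `label` holds the true class indices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    # Track gradients with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss (untargeted attack),
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()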
Pages: 102266-102291
Number of pages: 26
Related papers
50 records in total
  • [1] Adversarial attacks and defenses in deep learning for image recognition: A survey
    Wang, Jia
    Wang, Chengyu
    Lin, Qiuzhen
    Luo, Chengwen
    Wu, Chao
    Li, Jianqiang
    [J]. NEUROCOMPUTING, 2022, 514 : 162 - 181
  • [2] Defense Against Adversarial Attacks in Deep Learning
    Li, Yuancheng
    Wang, Yimeng
    [J]. APPLIED SCIENCES-BASEL, 2019, 9 (01):
  • [3] A Detailed Study on Adversarial attacks and Defense Mechanisms on various Deep Learning Models
    Priya, K. V.
    Dinesh, Peter J.
    [J]. 2023 ADVANCED COMPUTING AND COMMUNICATION TECHNOLOGIES FOR HIGH PERFORMANCE APPLICATIONS, ACCTHPA, 2023,
  • [4] Deep Learning Defense Method Against Adversarial Attacks
    Wang, Ling
    Zhang, Cheng
    Liu, Jie
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 3667 - 3671
  • [5] Adversarial attacks and adversarial training for burn image segmentation based on deep learning
    Chen, Luying
    Liang, Jiakai
    Wang, Chao
    Yue, Keqiang
    Li, Wenjun
    Fu, Zhihui
    [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2024, 62 (09) : 2717 - 2735
  • [6] Mitigating adversarial evasion attacks by deep active learning for medical image classification
    Ahmed, Usman
    Lin, Jerry Chun-Wei
    Srivastava, Gautam
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (29) : 41899 - 41910
  • [7] Adversarial Attacks and Defense on Deep Learning Classification Models using YCbCr Color Images
    Pestana, Camilo
    Akhtar, Naveed
    Liu, Wei
    Glance, David
    Mian, Ajmal
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [8] Deep learning in image reconstruction: vulnerability under adversarial attacks and potential defense strategies
    Zhang, Chengzhu
    Li, Yinsheng
    Chen, Guang-Hong
    [J]. MEDICAL IMAGING 2021: PHYSICS OF MEDICAL IMAGING, 2021, 11595
  • [9] Threat of Adversarial Attacks within Deep Learning: Survey
    Ata-Us-samad
    Singh, Roshni
    [J]. Recent Advances in Computer Science and Communications, 2023, 16 (07)