Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification

Cited by: 20
Authors
Khamaiseh, Samer Y. [1 ]
Bagagem, Derek [2 ]
Al-Alaj, Abdullah [3 ]
Mancino, Mathew [4 ]
Alomari, Hakam W. [1 ]
Affiliations
[1] Miami Univ, Dept Comp Sci & Software Engn, Oxford, OH 45056 USA
[2] Monmouth Univ, Dept Comp Sci & Software Engn, West Long Branch, NJ 07764 USA
[3] Virginia Wesleyan Univ, Dept Comp Sci, Virginia Beach, VA 23455 USA
[4] CACI Int, Norfolk, VA 23455 USA
Keywords
Deep learning; Neural networks; Training data; Perturbation methods; Security; Computational modeling; Machine learning algorithms; Deep neural networks; artificial intelligence; adversarial examples; adversarial perturbations; COMPUTER VISION; ROBUSTNESS;
DOI
10.1109/ACCESS.2022.3208131
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline classification code
0812 ;
Abstract
The popularity of adopting deep neural networks (DNNs) to solve hard problems has increased substantially. Specifically, in the field of computer vision, DNNs are becoming a core element in developing many image and video classification and recognition applications. However, DNNs are vulnerable to adversarial attacks, in which, given a well-trained image classification model, a malicious input can be crafted by adding minor perturbations that cause the image to be misclassified. This phenomenon raises many security concerns about utilizing DNNs in safety-critical applications and has attracted the attention of academic and industry researchers. As a result, multiple studies have proposed novel attacks that can compromise the integrity of state-of-the-art image classification neural networks. The rise of these attacks has urged the research community to explore countermeasures that mitigate them and increase the reliability of adopting DNNs in major applications. Hence, various defense strategies have been proposed to protect DNNs against adversarial attacks. In this paper, we thoroughly review the most recent and state-of-the-art adversarial attack methods, providing an in-depth analysis and explanation of how these attacks work. In our review, we focus on explaining the mathematical concepts and terminology of adversarial attacks, which provides a comprehensive and solid survey for the research community. Additionally, we provide a comprehensive review of the most recent defense mechanisms and discuss their effectiveness in defending DNNs against adversarial attacks. Finally, we highlight the current challenges and open issues in this field as well as future research directions.
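The abstract's central idea — a small, deliberately chosen perturbation flipping a classifier's prediction — can be illustrated with the classic Fast Gradient Sign Method (FGSM). The sketch below is not taken from this survey; it is a minimal, self-contained illustration on a toy logistic-regression "classifier", where all weights, inputs, and the `eps` budget are invented for the example:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(dL/dx).

    For logistic regression with binary cross-entropy loss, the
    gradient of the loss w.r.t. the input is (sigmoid(w.x + b) - y) * w.
    """
    p = sigmoid(dot(w, x) + b)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

# Toy stand-in for a trained image classifier (values invented).
w = [1.0, -2.0, 0.5]   # model weights
b = 0.0                # bias
x = [0.6, 0.1, 0.2]    # "clean image" as flattened pixel values
y = 1.0                # true label

p_clean = sigmoid(dot(w, x) + b)           # > 0.5: correctly classified
x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_adv = sigmoid(dot(w, x_adv) + b)         # < 0.5: now misclassified
```

Each input coordinate moves by at most `eps`, yet the prediction flips — exactly the integrity violation the survey's attack taxonomy is concerned with, here shrunk to a linear model for readability.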
Pages: 102266 - 102291
Page count: 26
Related papers
50 records total
  • [31] Adaptive Image Reconstruction for Defense Against Adversarial Attacks
    Yang, Yanan
    Shih, Frank Y.
    Chang, I-Cheng
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (12)
  • [32] Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning
    Sagduyu, Yalin E.
    [J]. arXiv, 2022,
  • [33] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    [J]. 2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341
  • [34] Adversarial Machine Learning and Defense Game for NextG Signal Classification with Deep Learning
    Sagduyu, Yalin E.
    [J]. 2022 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM), 2022,
  • [35] HFAD: Homomorphic Filtering Adversarial Defense Against Adversarial Attacks in Automatic Modulation Classification
    Zhang, Sicheng
    Lin, Yun
    Yu, Jiarun
    Zhang, Jianting
    Xuan, Qi
    Xu, Dongwei
    Wang, Juzhen
    Wang, Meiyu
    [J]. IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2024, 10 (03) : 880 - 892
  • [36] Deep Intrinsic Decomposition With Adversarial Learning for Hyperspectral Image Classification
    Gong, Zhiqiang
    Qi, Jiahao
    Zhong, Ping
    Zhou, Xian
    Yao, Wen
    [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [37] DEEP ADVERSARIAL ACTIVE LEARNING WITH MODEL UNCERTAINTY FOR IMAGE CLASSIFICATION
    Zhu, Zheng
    Wang, Hongxing
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1711 - 1715
  • [38] A Survey on Adversarial Deep Learning Robustness in Medical Image Analysis
    Apostolidis, Kyriakos D.
    Papakostas, George A.
    [J]. ELECTRONICS, 2021, 10 (17)
  • [39] On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
    Park, Sanglee
    So, Jungmin
    [J]. APPLIED SCIENCES-BASEL, 2020, 10 (22): : 1 - 16
  • [40] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350