Robustness and Security in Deep Learning: Adversarial Attacks and Countermeasures

Cited by: 0
Authors
Kaur, Navjot [1 ]
Singh, Someet [1 ]
Deore, Shailesh Shivaji [2 ]
Vidhate, Deepak A. [3 ]
Haridas, Divya [4 ]
Kosuri, Gopala Varma [5 ]
Kolhe, Mohini Ravindra [6 ]
Affiliations
[1] Lovely Profess Univ, Bengaluru, India
[2] SSVPS Bapusaheb Shivajirao Deore Coll Engn, Dept Comp Engn, Dhule, Maharashtra, India
[3] Dr Vithalrao Vikhe Patil Coll Engn Vilad Ghat, Dept Informat Technol, Ahmednagar, Maharashtra, India
[4] Saveetha Inst Med & Tech Sci SIMTS, Saveetha Sch Engn, Dept Condensed Matter Phys, Chennai 602105, Tamil Nadu, India
[5] SRKR Engn Coll, CSE, Bhimavaram, India
[6] Dr DY Patil Inst Technol, Pune, India
Keywords
Deep Learning; Adversarial Attacks; Robustness; Defense Mechanisms; Adversarial Training; Input Preprocessing;
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology and Communication Technology];
Discipline Codes
0808; 0809
Abstract
Deep learning models have demonstrated remarkable performance across various domains, yet their susceptibility to adversarial attacks remains a significant concern. In this study, we investigate the effectiveness of three defense mechanisms, namely Baseline (No Defense), Adversarial Training, and Input Preprocessing, in enhancing the robustness of deep learning models against adversarial attacks. The baseline model serves as a reference point, highlighting the vulnerability of deep learning systems to adversarial perturbations. Adversarial Training, which augments the training data with adversarial examples, significantly improves model resilience, yielding higher accuracy under both Fast Gradient Sign Method (FGSM) and Iterative Gradient Sign Method (IGSM) attacks. Similarly, Input Preprocessing techniques mitigate the impact of adversarial perturbations on model predictions by modifying input data before inference. However, each defense mechanism presents trade-offs between computational cost and performance: Adversarial Training requires additional computational resources and longer training times, while Input Preprocessing may introduce distortions that affect model generalization. Future research may focus on more sophisticated defense mechanisms, including ensemble methods, gradient masking, and certified defense strategies, to provide robust and reliable deep learning systems in real-world scenarios. This study contributes to a deeper understanding of defense mechanisms against adversarial attacks in deep learning, highlighting the importance of robust strategies for enhancing model resilience.
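The attacks and defenses named in the abstract are standard techniques, and a minimal sketch may help make them concrete. The following PyTorch code is illustrative only: the epsilon and step-size values, the FGSM-based training mix, and the bit-depth squeezing used as the Input Preprocessing example are assumptions for exposition, not the configuration reported by the authors.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Fast Gradient Sign Method: a single step of size eps in the
    # direction of the sign of the input gradient (eps is illustrative).
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def igsm_attack(model, x, y, eps=0.03, alpha=0.007, steps=10):
    # Iterative Gradient Sign Method: repeated small FGSM steps,
    # projected back into the L-infinity eps-ball around the input.
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x_orig + (x_adv - x_orig).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def bit_depth_squeeze(x, bits=4):
    # One possible Input Preprocessing defense (an assumption here):
    # quantize pixels to 2**bits levels to wash out small perturbations.
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # One Adversarial Training step: train on an equal mix of clean
    # and FGSM-perturbed examples (the 50/50 mix is an assumption).
    model.train()
    x_adv = fgsm_attack(model, x, y, eps)
    optimizer.zero_grad()  # clear gradients accumulated by the attack
    loss = 0.5 * (F.cross_entropy(model(x), y) +
                  F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

Under these assumptions, the Baseline (No Defense) condition corresponds to evaluating a model directly on fgsm_attack or igsm_attack outputs, Adversarial Training to running adversarial_training_step over the training loader, and Input Preprocessing to wrapping inference as model(bit_depth_squeeze(x)).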
Pages: 1250-1257
Page count: 8