An active learning framework for adversarial training of deep neural networks

Cited by: 0
Authors
Susmita Ghosh [1]
Abhiroop Chatterjee [1]
Lance Fiondella [2]
Affiliations
[1] Jadavpur University, Department of Computer Science and Engineering
[2] University of Massachusetts, Department of Electrical and Computer Engineering
Keywords
Adversarial attacks; Deep neural network; FGSM; PGD; Active learning;
DOI
10.1007/s00521-024-10851-6
Abstract
This article introduces a novel approach, named "Targeted Adversarial Resilience Learning (TARL)", to bolster the robustness of Deep Neural Network (DNN) models against adversarial attacks. An initial evaluation of a baseline DNN model reveals a significant decline in accuracy when it is subjected to adversarial examples generated by techniques such as FGSM, PGD, Carlini-Wagner, and DeepFool attacks. To address this vulnerability, the article proposes an active learning framework in which the model iteratively identifies and learns from the most uncertain and misclassified instances. The key components of this approach are: computing an uncertainty score for the predicted class of each input sample, selecting challenging samples based on this score, labeling these challenging examples and augmenting the training set with them, and then retraining the model on the expanded training set. The iterative active learning process, governed by parameters such as the number of iterations and the batch size, systematically enhances the resilience of DNNs against adversarial threats. The proposed methodology has been evaluated on several popular datasets, namely SARS-CoV-2 CT scan, MNIST, CIFAR-10, and Caltech-101, and demonstrated to be effective. Experiments show that the framework improves adversarial accuracy from 17.4% to 98.71% on the SARS-CoV-2 dataset, from 8.4% to 99.89% on MNIST, from 1.6% to 78.84% on CIFAR-10, and from 12% to 92.92% on Caltech-101. Further, a comparative analysis with several state-of-the-art methods suggests that the proposed framework offers superior defense against various attack methods and provides a promising defensive mechanism for deep neural networks.
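The selection step described in the abstract — scoring the model's uncertainty on each candidate sample and pulling the most uncertain ones into the training set — can be sketched as below. This is a minimal illustration, not the authors' implementation: the entropy-based score, the function names, and the toy probabilities are all assumptions made for the example.

```python
import numpy as np

def predictive_entropy(probs):
    """Uncertainty score: entropy of the model's predicted class distribution.
    High entropy = the model is unsure which class the sample belongs to."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_uncertain(probs, batch_size):
    """Indices of the `batch_size` most uncertain samples in the pool."""
    scores = predictive_entropy(probs)
    return np.argsort(scores)[-batch_size:]

# Toy pool: softmax outputs for 5 unlabeled (e.g. adversarial) samples, 3 classes.
pool_probs = np.array([
    [0.98, 0.01, 0.01],  # confident prediction
    [0.34, 0.33, 0.33],  # near-uniform: highly uncertain
    [0.90, 0.05, 0.05],
    [0.40, 0.35, 0.25],  # uncertain
    [0.97, 0.02, 0.01],
])

chosen = select_uncertain(pool_probs, batch_size=2)
# The two near-uniform rows (indices 1 and 3) are the ones selected for
# labeling and augmentation into the training set before retraining.
print(sorted(chosen.tolist()))  # [1, 3]
```

In the full TARL loop this selection would run once per active learning iteration, with the model retrained on the augmented set between iterations.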
Pages: 6849-6876 (27 pages)
Related Papers (50 total)
  • [1] AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks Through Accuracy Gradient
    Nikfam, Farzad
    Marchisio, Alberto
    Martina, Maurizio
    Shafique, Muhammad
    IEEE Access, 2022, 10: 108997-109007
  • [2] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    Symmetry-Basel, 2021, 13 (03)
  • [3] Learning Secured Modulation With Deep Adversarial Neural Networks
    Mohammed, Hesham
    Saha, Dola
    2020 IEEE 92nd Vehicular Technology Conference (VTC2020-Fall), 2020
  • [4] A Framework for Enhancing Deep Neural Networks Against Adversarial Malware
    Li, Deqiang
    Li, Qianmu
    Ye, Yanfang
    Xu, Shouhuai
    IEEE Transactions on Network Science and Engineering, 2021, 8 (01): 736-750
  • [5] Fast Training of Deep Neural Networks Robust to Adversarial Perturbations
    Goodwin, Justin
    Brown, Olivia
    Helus, Victoria
    2020 IEEE High Performance Extreme Computing Conference (HPEC), 2020
  • [6] A novel framework for detecting social bots with deep neural networks and active learning
    Wu, Yuhao
    Fang, Yuzhou
    Shang, Shuaikang
    Jin, Jing
    Wei, Lai
    Wang, Haizhou
    Knowledge-Based Systems, 2021, 211
  • [7] Accelerating the Training of Convolutional Neural Networks for Image Segmentation with Deep Active Learning
    Chen, Weitao
    Salay, Rick
    Sedwards, Sean
    Abdelzad, Vahdat
    Czarnecki, Krzysztof
    2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), 2020
  • [8] Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
    Guo, Haoqiang
    Peng, Lu
    Zhang, Jian
    Qi, Fang
    Duan, Lide
    2019 Tenth International Green and Sustainable Computing Conference (IGSC), 2019
  • [9] Enhancing adversarial robustness for deep metric learning via neural discrete adversarial training
    Li, Chaofei
    Zhu, Ziyuan
    Niu, Ruicheng
    Zhao, Yuting
    Computers & Security, 2024, 143