An active learning framework for adversarial training of deep neural networks

Cited by: 0
Authors
Susmita Ghosh [1 ]
Abhiroop Chatterjee [1 ]
Lance Fiondella [2 ]
Affiliations
[1] Jadavpur University, Department of Computer Science and Engineering
[2] University of Massachusetts, Department of Electrical and Computer Engineering
Keywords
Adversarial attacks; Deep neural network; FGSM; PGD; Active learning
DOI
10.1007/s00521-024-10851-6
Abstract
This article introduces “Targeted Adversarial Resilience Learning (TARL)”, a novel approach to bolster the robustness of Deep Neural Network (DNN) models against adversarial attacks. An initial evaluation of a baseline DNN model reveals a significant drop in accuracy when it is subjected to adversarial examples generated by attacks such as FGSM, PGD, Carlini-Wagner, and DeepFool. To address this vulnerability, the article proposes an active learning framework in which the model iteratively identifies and learns from the most uncertain and misclassified instances. The key components of this approach are: estimating an uncertainty score for the predicted class of each input sample, selecting challenging samples based on this uncertainty score, labeling these challenging examples and augmenting the training set with them, and thereafter retraining the model on the expanded training set. The iterative active learning process, governed by parameters such as the number of iterations and the batch size, demonstrates the potential to systematically enhance the resilience of DNNs against adversarial threats. The proposed methodology has been investigated on several popular datasets, namely SARS-CoV-2 CT scan, MNIST, CIFAR-10, and Caltech-101, and demonstrated to be effective. Experiments illustrate that the learning framework improves adversarial accuracy from 17.4% to 98.71% on the SARS-CoV-2 dataset, from 8.4% to 99.89% on MNIST, from 1.6% to 78.84% on CIFAR-10, and from 12% to 92.92% on Caltech-101. Further, a comparative analysis with several state-of-the-art methods suggests that the proposed framework offers superior defense against various attack methods and provides a promising defensive mechanism for deep neural networks.
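
The abstract describes the TARL loop only at a high level (uncertainty scoring, selection of challenging samples, augmentation, and retraining over a fixed number of iterations and batch size). The sketch below illustrates one such iteration in PyTorch under stated assumptions: FGSM is used as the attack, predictive entropy as the uncertainty score, and labels for the selected pool samples are assumed available. The names fgsm_attack, predictive_entropy, and tarl_iteration, and the parameters eps, top_k, and batch_size, are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # One-step FGSM: perturb inputs along the sign of the input gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def predictive_entropy(model, x):
    # Uncertainty score: entropy of the softmax predictive distribution.
    p = F.softmax(model(x), dim=1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)

def tarl_iteration(model, optimizer, train_x, train_y, pool_x, pool_y,
                   batch_size=64, top_k=256, eps=0.03):
    # One active-learning iteration: attack the pool, score uncertainty,
    # select the most uncertain/misclassified samples, augment, retrain.
    model.eval()
    adv_pool = fgsm_attack(model, pool_x, pool_y, eps)
    scores = predictive_entropy(model, adv_pool)
    with torch.no_grad():
        preds = model(adv_pool).argmax(dim=1)
    # Push misclassified adversarial samples to the top, then rank by entropy.
    scores = scores + (preds != pool_y).float() * (scores.max() + 1.0)
    idx = scores.topk(min(top_k, scores.numel())).indices
    train_x = torch.cat([train_x, adv_pool[idx]])
    train_y = torch.cat([train_y, pool_y[idx]])  # labels assumed known (oracle)
    model.train()
    for i in range(0, train_x.size(0), batch_size):  # one pass over the expanded set
        xb, yb = train_x[i:i + batch_size], train_y[i:i + batch_size]
        optimizer.zero_grad()
        F.cross_entropy(model(xb), yb).backward()
        optimizer.step()
    return train_x, train_y

In the paper's setting the attack (e.g., PGD or Carlini-Wagner instead of FGSM), the exact selection rule, and the number of outer iterations would follow the reported experimental configuration; the sketch only conveys the structure of the loop.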
Pages: 6849-6876
Number of pages: 27
Related papers
50 records in total
  • [21] Kawano, Yasufumi; Nota, Yoshiki; Mochizuki, Rinpei; Aoki, Yoshimitsu. Non-Deep Active Learning for Deep Neural Networks. SENSORS, 2022, 22 (14)
  • [22] Balaiah, Thanasekhar; Jeyadoss, Timothy Jones Thomas; Thirumurugan, Sainee; Ravi, Rahul Chander. A Deep Learning Framework for Automated Transfer Learning of Neural Networks. 2019 11TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTING (ICOAC 2019), 2019: 428-432
  • [23] Ota, Kei; Jha, Devesh K.; Kanezaki, Asako. A Framework for Training Larger Networks for Deep Reinforcement Learning. MACHINE LEARNING, 2024, 113 (09): 6115-6139
  • [24] Date, Prasanna; Carothers, Christopher D.; Mitchell, John E.; Hendler, James A.; Magdon-Ismail, Malik. Training Deep Neural Networks with Constrained Learning Parameters. 2020 INTERNATIONAL CONFERENCE ON REBOOTING COMPUTING (ICRC 2020), 2020: 107-115
  • [25] Gao, Qingyi; Wang, Xiao. Theoretical Investigation of Generalization Bounds for Adversarial Learning of Deep Neural Networks. JOURNAL OF STATISTICAL THEORY AND PRACTICE, 2021, 15
  • [26] Kim, Jungeum; Wang, Xiao. Robust Sensible Adversarial Learning of Deep Neural Networks for Image Classification. ANNALS OF APPLIED STATISTICS, 2023, 17 (02): 961-984
  • [27] Gao, Qingyi; Wang, Xiao. Theoretical Investigation of Generalization Bounds for Adversarial Learning of Deep Neural Networks. JOURNAL OF STATISTICAL THEORY AND PRACTICE, 2021, 15 (02)
  • [28] Munoz-Ordonez, Julian; Cobos, Carlos; Mendoza, Martha; Herrera-Viedma, Enrique; Herrera, Francisco; Tabik, Siham. Framework for the Training of Deep Neural Networks in TensorFlow Using Metaheuristics. INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2018, PT I, 2018, 11314: 801-811
  • [29] Sun, Guangling; Su, Yuying; Qin, Chuan; Xu, Wenbo; Lu, Xiaofeng; Ceglowski, Andrzej. Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples. MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [30] Zhao, Pu; Liu, Sijia; Wang, Yanzhi; Lin, Xue. An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks. PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), 2018: 1065-1073