Using Single-Step Adversarial Training to Defend Iterative Adversarial Examples

Cited by: 8
Authors
Liu, Guanxiong [1 ]
Khalil, Issa [2 ]
Khreishah, Abdallah [1 ]
Affiliations
[1] New Jersey Inst Technol, Newark, NJ 07102 USA
[2] Qatar Comp Res Inst, Doha, Qatar
Keywords
adversarial machine learning; adversarial training;
DOI
10.1145/3422337.3447841
CLC Number
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
Adversarial examples are among the biggest challenges for machine learning models, especially neural network classifiers. They are inputs carrying perturbations that are imperceptible to humans yet fool machine learning models. Adversarial training has made great progress as a defense; however, its overwhelming computational cost limits its applicability, and little has been done to overcome this issue. Single-step adversarial training methods have been proposed as computationally viable alternatives, but they still fail to defend against iterative adversarial examples. In this work, we first experimentally analyze several state-of-the-art (SOTA) defenses against adversarial examples. Then, based on these observations, we propose a novel single-step adversarial training method that defends against both single-step and iterative adversarial examples. Through extensive evaluation, we demonstrate that the proposed method combines the advantages of both single-step (low training overhead) and iterative (high robustness) adversarial training defenses. Compared with ATDA on the CIFAR-10 dataset, for example, our method achieves a 35.67% improvement in test accuracy and a 19.14% reduction in training time. Compared with methods that train on BIM or Madry examples (iterative methods) on CIFAR-10, it saves up to 76.03% of training time with less than 3.78% degradation in test accuracy. Finally, our experiments on the ImageNet dataset clearly show the scalability of our approach and its performance advantages over SOTA single-step approaches.
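The single-step vs. iterative distinction the abstract draws can be illustrated with the two standard attack rules involved: FGSM takes one signed-gradient step of size eps, while BIM takes many small steps of size alpha, projected back into the eps-ball. The sketch below uses a toy NumPy linear classifier purely for illustration; it is not the paper's model or its proposed defense, and the function names and the eps/alpha values are assumptions made for the example.

```python
import numpy as np

# Toy linear "classifier": logits = W @ x, cross-entropy loss.
# Illustrates the FGSM (single-step) and BIM (iterative) perturbation
# rules referenced in the abstract; the paper's actual method is NOT
# reproduced here.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_grad_x(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x."""
    p = softmax(W @ x)
    p[y] -= 1.0           # dL/dlogits for true class y
    return W.T @ p        # chain rule back to the input

def fgsm(W, x, y, eps):
    """Single-step attack: one signed-gradient step of size eps."""
    return np.clip(x + eps * np.sign(loss_grad_x(W, x, y)), 0.0, 1.0)

def bim(W, x, y, eps, alpha, steps):
    """Iterative attack (BIM): small steps, projected to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_x(W, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid input
    return x_adv
```

In standard adversarial training, the model is then fit on examples produced by one of these rules; the paper's contribution is a single-step (FGSM-cost) generation scheme whose trained models remain robust to the iterative (BIM-like) rule.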
Pages: 17-27 (11 pages)
Related Papers
50 records
  • [1] Xie, Lehui; Wang, Yaopeng; Yin, Jia-Li; Liu, Ximeng. Robust Single-Step Adversarial Training with Regularizer. Pattern Recognition and Computer Vision, PT IV, 2021, 13022: 29-41
  • [2] Vivek, B. S.; Babu, R. Venkatesh. Single-step Adversarial Training with Dropout Scheduling. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 947-956
  • [3] Wiens, Daniel; Hammer, Barbara. Single-step Adversarial Training for Semantic Segmentation. Proceedings of the 11th International Conference on Pattern Recognition Applications and Methods (ICPRAM), 2021: 179-187
  • [4] Kim, Hoki; Lee, Woojin; Lee, Jaewook. Understanding Catastrophic Overfitting in Single-step Adversarial Training. Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021, 35: 8119-8127
  • [5] Wang, Shaopeng; Huang, Yanhong; Shi, Jianqi; Yang, Yang; Guo, Xin. Improving Single-Step Adversarial Training by Local Smoothing. 2023 International Joint Conference on Neural Networks (IJCNN), 2023
  • [6] Li, Zhuorong; Yu, Daiwei; Wu, Minghui; Chan, Sixian; Yu, Hongchuan; Han, Zhike. Revisiting Single-Step Adversarial Training for Robustness and Generalization. Pattern Recognition, 2024, 151
  • [7] de Jorge, Pau; Bibi, Adel; Volpi, Riccardo; Sanyal, Amartya; Torr, Philip H. S.; Rogez, Gregory; Dokania, Puneet K. Make Some Noise: Reliable and Efficient Single-Step Adversarial Training. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
  • [8] Tang, Keke; Lou, Tianrui; Peng, Weilong; Chen, Nenglun; Shi, Yawen; Wang, Wenping. Effective Single-Step Adversarial Training with Energy-Based Models. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, 8(5): 1-12
  • [9] Vivek, B. S.; Revanur, Ambareesh; Venkat, Naveen; Babu, R. Venkatesh. Plug-and-Pipeline: Efficient Regularization for Single-Step Adversarial Training. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020), 2020: 138-146
  • [10] Kocian, Matej; Pilat, Martin. Using Local Convolutional Units to Defend Against Adversarial Examples. 2019 International Joint Conference on Neural Networks (IJCNN), 2019