Regional Adversarial Training for Better Robust Generalization

Cited by: 0
Authors
Song, Chuanbiao [1 ]
Fan, Yanbo [2 ]
Zhou, Aoyang [1 ]
Wu, Baoyuan [3 ,4 ]
Li, Yiming [5 ]
Li, Zhifeng [2 ]
He, Kun [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Wuhan, Hubei, Peoples R China
[2] Tencent, Shenzhen, Guangdong, Peoples R China
[3] Chinese Univ Hong Kong, Shenzhen, Guangdong, Peoples R China
[4] Shenzhen Res Inst Big Data, Shenzhen, Guangdong, Peoples R China
[5] Tsinghua Univ, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; US National Science Foundation;
Keywords
Regional Adversarial Training; Robustness; Adversarial Defense; Label Smoothing;
DOI
10.1007/s11263-024-02103-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Adversarial training (AT) has been demonstrated to be one of the most promising defenses against various adversarial attacks. To our knowledge, existing AT-based methods usually train on the locally most adversarial perturbed points and treat all perturbed points equally, which may lead to considerably weaker adversarial robust generalization on test data. In this work, we introduce a new adversarial training framework that considers both the diversity and the characteristics of the perturbed points in the vicinity of benign samples. To realize the framework, we propose a Regional Adversarial Training (RAT) defense method that first utilizes the attack path generated by projected gradient descent (PGD), a typical iterative attack, and constructs an adversarial region based on that path. RAT then efficiently samples diverse perturbed training points inside this region and applies a distance-aware label smoothing mechanism to capture our intuition that perturbed points at different locations should have different impacts on model performance. Extensive experiments on several benchmark datasets show that RAT consistently and significantly improves over standard adversarial training (SAT) and exhibits better robust generalization.
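The abstract's core idea (record a PGD attack path, sample a training point from the region it spans rather than only the endpoint, and smooth the label more for points farther from the benign sample) can be sketched as follows. This is a minimal illustration on a toy logistic-regression model, not the authors' implementation; the names `pgd_path`, `distance_aware_label`, and the hyperparameters `epsilon`, `alpha`, and `max_smooth` are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_path(x, y, w, b, epsilon=0.3, alpha=0.1, steps=5):
    """Return the sequence of points visited by L-inf PGD (the attack path)."""
    path = [x.copy()]
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad = (p - y) * w                     # input gradient of the BCE loss
        x_adv = x_adv + alpha * np.sign(grad)  # ascend the loss
        x_adv = x + np.clip(x_adv - x, -epsilon, epsilon)  # project to eps-ball
        path.append(x_adv.copy())
    return path

def distance_aware_label(y, x, x_perturbed, epsilon=0.3, max_smooth=0.2):
    """Soften the binary label more for points farther from the benign sample."""
    dist = np.abs(x_perturbed - x).max()       # L-inf distance from benign x
    s = max_smooth * min(dist / epsilon, 1.0)  # smoothing grows with distance
    return y * (1 - s) + 0.5 * s

rng = np.random.default_rng(0)
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0

path = pgd_path(x, y, w, b)
# RAT-style sampling: train on a random point drawn from the attack path,
# not always on the endpoint (the locally most adversarial point).
x_sample = path[rng.integers(1, len(path))]
y_soft = distance_aware_label(y, x, x_sample)
```

A standard AT loop would use only `path[-1]` with the hard label `y`; the sketch instead pairs a region sample with a distance-dependent soft label, which is the combination the abstract describes.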
Pages: 4510-4520
Page count: 11