Provable Unrestricted Adversarial Training Without Compromise With Generalizability

Cited by: 1
Authors
Zhang, Lilin [1 ]
Yang, Ning [1 ]
Sun, Yanchao [2 ]
Yu, Philip S. [3 ]
Affiliations
[1] Sichuan Univ, Sch Comp Sci, Chengdu 610017, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[3] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Natural Science Foundation of China;
Keywords
Robustness; Training; Standards; Perturbation methods; Stars; Optimization; Computer science; Adversarial robustness; adversarial training; unrestricted adversarial examples; standard generalizability;
DOI
10.1109/TPAMI.2024.3400988
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Adversarial training (AT) is widely considered the most promising strategy for defending against adversarial attacks and has drawn increasing interest from researchers. However, existing AT methods still suffer from two challenges. First, they are unable to handle unrestricted adversarial examples (UAEs), which are built from scratch, as opposed to restricted adversarial examples (RAEs), which are created by adding perturbations bounded by an ℓp-norm to observed examples. Second, existing AT methods often achieve adversarial robustness at the expense of standard generalizability (i.e., accuracy on natural examples) because they trade one off against the other. To overcome these challenges, we propose a unique viewpoint that understands UAEs as imperceptibly perturbed unobserved examples. We also find that the tradeoff results from the separation of the distributions of adversarial examples and natural examples. Based on these ideas, we propose a novel AT approach called Provable Unrestricted Adversarial Training (PUAT), which provides a target classifier with comprehensive adversarial robustness against both UAEs and RAEs while simultaneously improving its standard generalizability. In particular, PUAT utilizes partially labeled data to achieve effective UAE generation by accurately capturing the natural data distribution through a novel augmented triple-GAN. At the same time, PUAT extends traditional AT by introducing the supervised loss of the target classifier into the adversarial loss and, with the collaboration of the augmented triple-GAN, achieves alignment between the UAE distribution, the natural data distribution, and the distribution learned by the classifier. Finally, a solid theoretical analysis and extensive experiments on widely used benchmarks demonstrate the superiority of PUAT.
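The abstract describes the training objective only at a high level. Below is a minimal, hypothetical sketch (not the authors' code) of the general idea of coupling a supervised loss on natural examples with a supervised adversarial loss on generator-produced UAEs; the module names (Classifier, Generator), dimensions, and the single uae_weight coefficient are illustrative assumptions, and the paper's actual augmented triple-GAN and alignment objective are more involved.

# Hypothetical sketch, not the authors' implementation: it only illustrates combining
# a supervised loss on natural examples with a supervised adversarial loss on
# generator-produced unrestricted adversarial examples (UAEs). Names and sizes are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, FEAT_DIM, Z_DIM = 10, 32, 16

class Classifier(nn.Module):                      # target classifier
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, NUM_CLASSES))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):                       # stand-in for the UAE generator
    def __init__(self):                           # (the paper uses an augmented triple-GAN)
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM + NUM_CLASSES, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT_DIM))
    def forward(self, z, y):
        y_onehot = F.one_hot(y, NUM_CLASSES).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

def training_loss(clf, gen, x_nat, y_nat, z, y_gen, uae_weight=1.0):
    # Supervised loss on natural labeled data: preserves standard generalizability.
    loss_nat = F.cross_entropy(clf(x_nat), y_nat)
    # Supervised loss on generated UAEs: the classifier must also be correct on
    # label-conditioned examples produced by the generator.
    loss_uae = F.cross_entropy(clf(gen(z, y_gen)), y_gen)
    return loss_nat + uae_weight * loss_uae

# Toy usage with random tensors, only to show shapes and that the sketch runs.
clf, gen = Classifier(), Generator()
x_nat, y_nat = torch.randn(8, FEAT_DIM), torch.randint(0, NUM_CLASSES, (8,))
z, y_gen = torch.randn(8, Z_DIM), torch.randint(0, NUM_CLASSES, (8,))
print(training_loss(clf, gen, x_nat, y_nat, z, y_gen).item())

In PUAT itself, per the abstract, the generator and the classifier are trained collaboratively within the augmented triple-GAN so that the UAE distribution, the natural data distribution, and the classifier's learned distribution are aligned; the single-coefficient sum above only shows where the classifier's supervised loss enters the adversarial loss.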
Pages: 8302-8319
Number of pages: 18