Provable Unrestricted Adversarial Training Without Compromise With Generalizability

Cited by: 1
Authors
Zhang, Lilin [1 ]
Yang, Ning [1 ]
Sun, Yanchao [2 ]
Yu, Philip S. [3 ]
Affiliations
[1] Sichuan Univ, Sch Comp Sci, Chengdu 610017, Peoples R China
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[3] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Funding
National Natural Science Foundation of China;
Keywords
Robustness; Training; Standards; Perturbation methods; Stars; Optimization; Computer science; Adversarial robustness; adversarial training; unrestricted adversarial examples; standard generalizability;
DOI
10.1109/TPAMI.2024.3400988
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial training (AT) is widely considered the most promising strategy for defending against adversarial attacks and has drawn increasing interest from researchers. However, existing AT methods still suffer from two challenges. First, they are unable to handle unrestricted adversarial examples (UAEs), which are built from scratch, as opposed to restricted adversarial examples (RAEs), which are created by adding perturbations bounded in l_p norm to observed examples. Second, existing AT methods often achieve adversarial robustness at the expense of standard generalizability (i.e., the accuracy on natural examples) because they make a tradeoff between the two. To overcome these challenges, we propose a unique viewpoint that understands UAEs as imperceptibly perturbed unobserved examples. We also find that the tradeoff results from the separation of the distributions of adversarial examples and natural examples. Based on these ideas, we propose a novel AT approach called Provable Unrestricted Adversarial Training (PUAT), which provides a target classifier with comprehensive adversarial robustness against both UAEs and RAEs while simultaneously improving its standard generalizability. Specifically, PUAT utilizes partially labeled data to achieve effective UAE generation by accurately capturing the natural data distribution through a novel augmented triple-GAN. At the same time, PUAT extends traditional AT by introducing the supervised loss of the target classifier into the adversarial loss, and, with the collaboration of the augmented triple-GAN, achieves alignment between the UAE distribution, the natural data distribution, and the distribution learned by the classifier. Finally, solid theoretical analysis and extensive experiments on widely used benchmarks demonstrate the superiority of PUAT.
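For context, the restricted AT baseline that the abstract contrasts with UAE-based training is usually formulated as a min-max problem (standard textbook notation, not notation taken from this record):

```latex
\min_{\theta} \; \mathbb{E}_{(x,y)\sim \mathcal{D}}
\left[ \max_{\|\delta\|_p \le \epsilon} \mathcal{L}\big(f_\theta(x+\delta),\, y\big) \right]
```

Here the inner maximization searches for a restricted adversarial example (RAE) within an l_p-ball of radius epsilon around a natural example x. UAEs drop this perturbation constraint: rather than perturbing an observed x, they are generated from scratch, which in PUAT is done by sampling from the natural data distribution captured by the augmented triple-GAN.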
Pages: 8302-8319
Page count: 18
Related Papers
50 records in total
  • [21] Towards Transferable Unrestricted Adversarial Examples with Minimum Changes
    Liu, Fangcheng
    Zhang, Chao
    Zhang, Hongyang
    2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 327 - 338
  • [22] EGM: An Efficient Generative Model for Unrestricted Adversarial Examples
    Xiang, Tao
    Liu, Hangcheng
    Guo, Shangwei
    Gan, Yan
    Liao, Xiaofeng
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2022, 18 (04)
  • [23] Generating unrestricted adversarial examples via three parameters
    Hanieh Naderi
    Leili Goli
    Shohreh Kasaei
    Multimedia Tools and Applications, 2022, 81 : 21919 - 21938
  • [24] Generating unrestricted adversarial examples via three parameters
    Naderi, Hanieh
    Goli, Leili
    Kasaei, Shohreh
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (15) : 21919 - 21938
  • [25] Provable Training for Graph Contrastive Learning
    Yu, Yue
    Wang, Xiao
    Zhang, Mengmei
    Liu, Nian
    Shi, Chuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [26] Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
    Jordan, Matt
    Lewis, Justin
    Dimakis, Alexandros G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [27] Calendering without compromise
    Isler, Walter
    International Paperworld IPW, 2013, (09): : 28 - 29
  • [28] Efficient Semi-Supervised Adversarial Training without Guessing Labels
    Wu, Huimin
    Vazelhes, William
    Gu, Bin
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 538 - 547
  • [30] Creation Without Compromise
    Panken, Ted
    DOWN BEAT, 2017, 84 (11): : 28 - 28