Stylized Adversarial Defense

Cited by: 3
Authors
Naseer, Muzammal [1 ,2 ]
Khan, Salman [1 ,2 ]
Hayat, Munawar [3 ]
Khan, Fahad Shahbaz [1 ,4 ,5 ]
Porikli, Fatih [6 ]
Affiliations
[1] Mohamed bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
[2] Australian Natl Univ, Canberra, ACT 2601, Australia
[3] Monash Univ, Clayton, Vic 3800, Australia
[4] Mohamed bin Zayed Univ Artificial Intelligence, Masdar, Abu Dhabi, U Arab Emirates
[5] Linkoping Univ, S-58183 Linkoping, Sweden
[6] Qualcomm, San Diego, CA 92121 USA
Keywords
Training; Perturbation methods; Robustness; Multitasking; Predictive models; Computational modeling; Visualization; Adversarial training; style transfer; max-margin learning; adversarial attacks; multi-task objective
DOI
10.1109/TPAMI.2022.3207917
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to the input images. To address this vulnerability, adversarial training crafts perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods that use only class-boundary information (e.g., via a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries, which are in turn used to learn a robust model. Specifically, we use the style and content information of a target sample from another class, alongside its class-boundary information, to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between the source image and its adversary while maximizing the distance between the adversary and the target image. Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distribution shifts, and retains the model's accuracy on clean examples.
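As a rough illustration of the recipe the abstract describes, the sketch below combines style (Gram-matrix), content (feature-distance), and class-boundary (cross-entropy) terms to push a source image toward a target sample from another class, then scores the result with a max-margin hinge loss. This is a minimal sketch, not the authors' published implementation: the `(features, logits)` model interface, the equal loss weighting, and the PGD-style attack hyper-parameters are all assumptions made for illustration.

```python
# Minimal PyTorch sketch, NOT the authors' code: the (features, logits)
# model interface, equal loss weights, and attack hyper-parameters are
# assumptions made for illustration.
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    """Gram matrix as a style summary of a (B, C, H, W) feature map."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def craft_stylized_adversary(model, x_src, x_tgt, y_tgt,
                             eps=8 / 255, alpha=2 / 255, steps=7):
    """Perturb x_src toward a target sample x_tgt of another class using
    style, content, and class-boundary cues (the multi-task objective)."""
    x_adv = x_src.clone().detach()
    with torch.no_grad():
        feat_tgt, _ = model(x_tgt)                 # fixed target features
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feat_adv, logits = model(x_adv)
        style = F.mse_loss(gram_matrix(feat_adv), gram_matrix(feat_tgt))
        content = F.mse_loss(feat_adv, feat_tgt)
        boundary = F.cross_entropy(logits, y_tgt)  # pull toward target class
        loss = style + content + boundary          # assumed equal weighting
        grad, = torch.autograd.grad(loss, x_adv)
        # Descend the multi-task loss, i.e. move the image toward the target.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = x_src + (x_adv - x_src).clamp(-eps, eps)  # l_inf budget
        x_adv = x_adv.clamp(0, 1)
    return x_adv

def max_margin_loss(model, x_src, x_adv, x_tgt, margin=1.0):
    """Training loss: keep the adversary's features close to its source
    and far from the target sample it was stylized toward."""
    feat_src, _ = model(x_src)
    feat_adv, _ = model(x_adv)
    feat_tgt, _ = model(x_tgt)
    d_src = F.pairwise_distance(feat_adv.flatten(1), feat_src.flatten(1))
    d_tgt = F.pairwise_distance(feat_adv.flatten(1), feat_tgt.flatten(1))
    return F.relu(d_src - d_tgt + margin).mean()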
Pages: 6403-6414
Page count: 12
Related Papers
50 records in total
  • [1] Stylized Pairing for Robust Adversarial Defense
    Guan, Dejian
    Zhao, Wentao
    Liu, Xiao
    APPLIED SCIENCES-BASEL, 2022, 12 (18):
  • [2] Stylized Adversarial AutoEncoder for Image Generation
    Zhao, Yiru
    Deng, Bing
    Huang, Jianqiang
    Lu, Hongtao
    Hua, Xian-Sheng
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 244 - 251
  • [3] Stylized Crowd Formation Transformation Through Spatiotemporal Adversarial Learning
    Yan, Dapeng
    Huang, Kexiang
    Zhang, Longfei
    Ding, Gang Yi
    ADVANCED INTELLIGENT SYSTEMS, 2024, 6 (03)
  • [4] Towards Generating Stylized Image Captions via Adversarial Training
    Nezami, Omid Mohamad
    Dras, Mark
    Wan, Stephen
    Paris, Cecile
    Hamey, Len
    PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2019, 11670 : 270 - 284
  • [5] Uncouple Generative Adversarial Networks for Transferring Stylized Portraits to Realistic Faces
    Wang, Wenxiao
    Wong, Hon-Cheng
    Lo, Sio-Long
    Zhang, Guifang
    IEEE ACCESS, 2020, 8 : 213825 - 213839
  • [6] Text Adversarial Purification as Defense against Adversarial Attacks
    Li, Linyang
    Song, Demin
    Qiu, Xipeng
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 338 - 350
  • [7] Variational Adversarial Defense: A Bayes Perspective for Adversarial Training
    Zhao, Chenglong
    Mei, Shibin
    Ni, Bingbing
    Yuan, Shengchao
    Yu, Zhenbo
    Wang, Jun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (05) : 3047 - 3063
  • [8] The Defense of Adversarial Example with Conditional Generative Adversarial Networks
    Yu, Fangchao
    Wang, Li
    Fang, Xianjin
    Zhang, Youwen
    SECURITY AND COMMUNICATION NETWORKS, 2020, 2020
  • [9] Sinkhorn Adversarial Attack and Defense
    Subramanyam, A. V.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 4039 - 4049
  • [10] Adversarial Attack and Defense: A Survey
    Liang, Hongshuo
    He, Erlu
    Zhao, Yangyang
    Jia, Zhe
    Li, Hao
    ELECTRONICS, 2022, 11 (08)