Stylized Adversarial Defense

Cited by: 3
Authors
Naseer, Muzammal [1 ,2 ]
Khan, Salman [1 ,2 ]
Hayat, Munawar [3 ]
Khan, Fahad Shahbaz [1 ,4 ,5 ]
Porikli, Fatih [6 ]
Affiliations
[1] Mohamed bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
[2] Australian Natl Univ, Canberra, ACT 2601, Australia
[3] Monash Univ, Clayton, Vic 3800, Australia
[4] Mohamed bin Zayed Univ Artificial Intelligence, Masdar, Abu Dhabi, U Arab Emirates
[5] Linkoping Univ, S-58183 Linkoping, Sweden
[6] Qualcomm, San Diego, CA 92121 USA
Keywords
Training; Perturbation methods; Robustness; Multitasking; Predictive models; Computational modeling; Visualization; Adversarial training; style transfer; max-margin learning; adversarial attacks; multi-task objective;
DOI
10.1109/TPAMI.2022.3207917
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep Convolutional Neural Networks (CNNs) can easily be fooled by subtle, imperceptible changes to their input images. To address this vulnerability, adversarial training generates perturbation patterns and includes them in the training set to robustify the model. In contrast to existing adversarial training methods, which use only class-boundary information (e.g., via a cross-entropy loss), we propose to exploit additional information from the feature space to craft stronger adversaries that are, in turn, used to learn a robust model. Specifically, we use the style and content information of a target sample from another class, alongside its class-boundary information, to create adversarial perturbations. We apply our proposed multi-task objective in a deeply supervised manner, extracting multi-scale feature knowledge to create maximally separating adversaries. Subsequently, we propose a max-margin adversarial training approach that minimizes the distance between a source image and its adversary and maximizes the distance between the adversary and the target image. Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses, generalizes well to naturally occurring corruptions and data distribution shifts, and retains the model's accuracy on clean examples.
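The max-margin idea described in the abstract (pull the adversary's features back toward the source image while pushing them away from the target image) can be sketched as a hinge-style loss. This is an illustrative sketch only: the function name, the squared Euclidean distance, and the margin hyperparameter are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def max_margin_loss(f_src, f_adv, f_tgt, margin=1.0):
    """Illustrative max-margin objective on feature vectors.

    f_src: features of the clean source image
    f_adv: features of its adversarial example
    f_tgt: features of the target-class sample
    The loss is zero once the adversary is closer to the source
    than to the target by at least `margin`.
    """
    d_pos = np.sum((f_src - f_adv) ** 2)  # source <-> adversary: minimized
    d_neg = np.sum((f_adv - f_tgt) ** 2)  # adversary <-> target: maximized
    return max(0.0, d_pos - d_neg + margin)
```

In this hypothetical form, the objective resembles a triplet margin loss with the adversary as the anchor; the paper additionally applies its multi-task objective at multiple feature scales under deep supervision.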
Pages: 6403-6414
Number of pages: 12
Related Papers
50 records
  • [31] (AD)2: Adversarial domain adaptation to defense with adversarial perturbation removal
    Han, Keji
    Xia, Bin
    Li, Yun
    PATTERN RECOGNITION, 2022, 122
  • [32] Cycle-Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense
    Jiang, Lingyun
    Qiao, Kai
    Qin, Ruoxi
    Wang, Linyuan
    Yu, Wanting
    Chen, Jian
    Bu, Haibing
    Yan, Bin
    SECURITY AND COMMUNICATION NETWORKS, 2020, 2020 (2020)
  • [33] Adversarial Training Defense Based on Second-order Adversarial Examples
    Qian Yaguan
    Zhang Ximin
    Wang Bin
    Gu Zhaoquan
    Li Wei
    Yun Bensheng
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43 (11) : 3367 - 3373
  • [34] Efficient Adversarial Defense without Adversarial Training: A Batch Normalization Approach
    Zhu, Yao
    Wei, Xiao
    Zhu, Yue
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [35] The Best Defense is a Good Offense: Adversarial Augmentation against Adversarial Attacks
    Frosio, Iuri
    Kautz, Jan
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 4067 - 4076
  • [36] Defense Against Adversarial Attacks Using Topology Aligning Adversarial Training
    Kuang, Huafeng
    Liu, Hong
    Lin, Xianming
    Ji, Rongrong
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 3659 - 3673
  • [37] Defense-VAE: A Fast and Accurate Defense Against Adversarial Attacks
    Li, Xiang
    Ji, Shihao
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT II, 2020, 1168 : 191 - 207
  • [38] Adversarial Deep Learning: A Survey on Adversarial Attacks and Defense Mechanisms on Image Classification
    Khamaiseh, Samer Y.
    Bagagem, Derek
    Al-Alaj, Abdullah
    Mancino, Mathew
    Alomari, Hakam W.
    IEEE ACCESS, 2022, 10 : 102266 - 102291
  • [39] Adversarial Attacks and Defense on an Aircraft Classification Model Using a Generative Adversarial Network
    Colter, Jamison
    Kinnison, Matthew
    Henderson, Alex
    Harbour, Steven
    2023 IEEE/AIAA 42ND DIGITAL AVIONICS SYSTEMS CONFERENCE, DASC, 2023,
  • [40] Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning
    Rui Shao
    Pramuditha Perera
    Pong C. Yuen
    Vishal M. Patel
    International Journal of Computer Vision, 2022, 130 : 1070 - 1087