Improving Adversarial Robustness With Adversarial Augmentations

Cited by: 2
Authors
Chen, Chuanxi [1 ,2 ]
Ye, Dengpan [1 ,2 ]
He, Yiheng [1 ,2 ]
Tang, Long [1 ,2 ]
Xu, Yue [1 ,2 ]
Affiliations
[1] Wuhan Univ, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Robustness; Internet of Things; Security; Perturbation methods; Feature extraction; Data augmentation; Adversarial robustness; augmentations; contrastive learning (CL); deep neural networks (DNNs); Internet of Things (IoT) security;
DOI
10.1109/JIOT.2023.3301608
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Deep neural network (DNN)-based applications are being extensively researched and deployed in Internet of Things (IoT) devices due to their impressive performance. Recently, adversarial attacks have posed a significant threat to the security of DNNs, and adversarial training has emerged as a promising and effective defense against such attacks. However, existing adversarial training methods have shown limited success in defending against attacks unseen during training, which undermines their effectiveness. Moreover, generating adversarial perturbations for adversarial training requires massive amounts of expensive labeled data, a critical obstacle for robust DNN-based IoT applications. In this article, we first explore effective data augmentations by implementing adversarial attacks in a self-supervised manner in the latent space. We then propose new loss functions that avoid the collapse phenomenon of contrastive learning (CL) by measuring the distances between adversarially augmented pairs. Based on the adversarial features extracted via self-supervised CL, we propose a novel adversarial robust learning (ARL) method, which performs adversarial training without any labels and yields a more generally robust encoder network. Our approach is validated on commonly used benchmark data sets and models, where it achieves adversarial robustness against diverse adversarial attacks comparable to that of supervised adversarial training methods. Additionally, ARL outperforms state-of-the-art self-supervised adversarial learning techniques, achieving higher robustness and clean prediction accuracy on the downstream classification task.
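The core idea in the abstract — generating label-free adversarial augmentations by attacking a contrastive objective in latent space — can be sketched as follows. This is an illustrative toy, not the paper's implementation: the linear encoder, the simplified NT-Xent loss, the numerical-gradient attack, and all names (`encode`, `nt_xent_loss`, `adversarial_augment`) and hyperparameters are assumptions made for the sketch.

```python
import numpy as np

def encode(x, W):
    """Toy linear encoder followed by L2 normalization (stand-in for a DNN)."""
    z = x @ W
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def nt_xent_loss(z1, z2, tau=0.5):
    """Simplified NT-Xent contrastive loss: pull each positive pair
    (z1[i], z2[i]) together, push all other pairs apart."""
    z = np.concatenate([z1, z2], axis=0)            # (2N, d), rows unit-norm
    sim = z @ z.T / tau                             # scaled cosine similarity
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def adversarial_augment(x, W, eps=0.1, step=0.02, iters=3):
    """Find a bounded perturbation delta (||delta||_inf <= eps) that
    MAXIMIZES the contrastive loss between clean and perturbed embeddings,
    i.e., an adversarial augmentation computed without any labels.
    Forward-difference numerical gradients are used purely for
    illustration; a real implementation would use autograd."""
    delta = np.zeros_like(x)
    z_clean = encode(x, W)
    h = 1e-4
    for _ in range(iters):
        base = nt_xent_loss(z_clean, encode(x + delta, W))
        grad = np.zeros_like(x)
        for i in range(x.shape[0]):
            for j in range(x.shape[1]):
                d = delta.copy()
                d[i, j] += h
                grad[i, j] = (nt_xent_loss(z_clean, encode(x + d, W)) - base) / h
        delta = np.clip(delta + step * np.sign(grad), -eps, eps)
    return x + delta

# Usage: adversarial views act as harder positives than the clean views.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
x_adv = adversarial_augment(x, W)
loss_clean = nt_xent_loss(encode(x, W), encode(x, W))
loss_adv = nt_xent_loss(encode(x, W), encode(x_adv, W))
```

In the ARL setting described above, the maximization would be performed with autograd on a deep encoder, and the resulting adversarial views would then serve as augmented positive pairs for label-free contrastive adversarial training.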
Pages: 5105-5117
Page count: 13
Related Papers
50 records in total
  • [1] AugLy: Data Augmentations for Adversarial Robustness
    Papakipos, Zoe
    Bitton, Joanna
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 155 - 162
  • [2] EXPLOITING DOUBLY ADVERSARIAL EXAMPLES FOR IMPROVING ADVERSARIAL ROBUSTNESS
    Byun, Junyoung
    Go, Hyojun
    Cho, Seungju
    Kim, Changick
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1331 - 1335
  • [3] Sliced Wasserstein adversarial training for improving adversarial robustness
    Lee W.
    Lee S.
    Kim H.
    Lee J.
    Journal of Ambient Intelligence and Humanized Computing, 2024, 15 (08) : 3229 - 3242
  • [4] Improving the robustness of steganalysis in the adversarial environment with Generative Adversarial Network
    Peng, Ye
    Yu, Qi
    Fu, Guobin
    Zhang, WenWen
    Duan, ChaoFan
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2024, 82
  • [5] Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
    Li, Xingjian
    Goodman, Dou
    Liu, Ji
    Wei, Tao
    Dou, Dejing
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 4
  • [6] Feature Denoising for Improving Adversarial Robustness
    Xie, Cihang
    Wu, Yuxin
    van der Maaten, Laurens
    Yuille, Alan
    He, Kaiming
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 501 - 509
  • [7] Are Labels Required for Improving Adversarial Robustness?
    Uesato, Jonathan
    Alayrac, Jean-Baptiste
    Huang, Po-Sen
    Stanforth, Robert
    Fawzi, Alhussein
    Kohli, Pushmeet
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [8] Challenges of Adversarial Image Augmentations
    Blaas, Arno
    Suau, Xavier
    Ramapuram, Jason
    Apostoloff, Nicholas
    Zappella, Luca
    WORKSHOP AT NEURIPS 2021, VOL 163, 2021, 163 : 9 - 14
  • [9] Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification
    Wang, Desheng
    Jin, Weidong
    Wu, Yunpu
    SENSORS, 2023, 23 (06)
  • [10] Parseval Networks: Improving Robustness to Adversarial Examples
    Cisse, Moustapha
    Bojanowski, Piotr
    Grave, Edouard
    Dauphin, Yann
    Usunier, Nicolas
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 70, 2017, 70