Improving Adversarial Robustness With Adversarial Augmentations

Cited by: 2
Authors
Chen, Chuanxi [1 ,2 ]
Ye, Dengpan [1 ,2 ]
He, Yiheng [1 ,2 ]
Tang, Long [1 ,2 ]
Xu, Yue [1 ,2 ]
Affiliations
[1] Wuhan Univ, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Robustness; Internet of Things; Security; Perturbation methods; Feature extraction; Data augmentation; Adversarial robustness; augmentations; contrastive learning (CL); deep neural networks (DNNs); Internet of Things (IoT) security;
DOI
10.1109/JIOT.2023.3301608
Chinese Library Classification
TP [automation and computer technology];
Discipline code
0812;
Abstract
Deep neural network (DNN)-based applications are being extensively researched and deployed in everyday Internet of Things (IoT) devices due to their impressive performance. Recently, adversarial attacks have posed a significant threat to the security of DNNs, and adversarial training has emerged as a promising and effective defense against such attacks. However, existing adversarial training methods show limited success against attacks unseen during training, which undermines their effectiveness. Moreover, generating adversarial perturbations for adversarial training requires massive amounts of expensive labeled data, a critical obstacle to building robust DNN-based IoT applications. In this article, we first explore effective data augmentations by implementing adversarial attacks in a self-supervised manner in latent space. We then propose new loss metric functions that avoid the collapse phenomenon of contrastive learning (CL) by measuring the distances between adversarially augmented pairs. Based on the adversarial features extracted with self-supervised CL, we propose a novel adversarial robust learning (ARL) method that performs adversarial training without any labels and yields a more generally robust encoder network. Our approach is validated on commonly used benchmark data sets and models, where it achieves adversarial robustness comparable to supervised adversarial training methods across different adversarial attacks. Additionally, ARL outperforms state-of-the-art self-supervised adversarial learning techniques in both robustness and clean prediction accuracy on the downstream classification task.
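The core idea the abstract describes, crafting adversarial augmentations without labels by attacking a contrastive objective in latent space, can be sketched as follows. This is a minimal illustrative example, not the authors' actual implementation: the toy encoder, the NT-Xent-style loss, and the PGD hyperparameters (epsilon, step size, iteration count) are all assumptions made for the sketch.

```python
# Sketch of label-free adversarial augmentation: a PGD-style attack that
# maximizes a contrastive loss between two views of the same inputs, so no
# class labels are needed. Encoder and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for a real encoder network (e.g., a ResNet backbone).
encoder = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 8 * 8, 32),
)

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style contrastive loss: matched rows are positive pairs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

def contrastive_pgd(x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
    """PGD in input space that ASCENDS the contrastive loss (no labels)."""
    delta = torch.zeros_like(x1, requires_grad=True)
    for _ in range(steps):
        loss = nt_xent(encoder(x1 + delta), encoder(x2))
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # project to L-inf ball
        delta.grad.zero_()
    return (x1 + delta).detach()

x1 = torch.rand(4, 3, 8, 8)                    # one view of 4 images
x2 = x1 + 0.01 * torch.randn_like(x1)          # a second, weakly augmented view
x_adv = contrastive_pgd(x1, x2)                # adversarial augmentation of x1

before = nt_xent(encoder(x1), encoder(x2)).item()
after = nt_xent(encoder(x_adv), encoder(x2)).item()
print(f"contrastive loss before: {before:.3f}, after attack: {after:.3f}")
```

In an ARL-style pipeline, `x_adv` would then serve as a hard positive in the contrastive training loop, so the encoder learns robust features from unlabeled data alone.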
Pages: 5105-5117
Page count: 13
Related papers
50 records in total
  • [21] Improving the Adversarial Robustness of Object Detection with Contrastive Learning
    Zeng, Weiwei
    Gao, Song
    Zhou, Wei
    Dong, Yunyun
    Wang, Ruxin
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT IX, 2024, 14433 : 29 - 40
  • [22] Improving Adversarial Robustness via Information Bottleneck Distillation
    Kuang, Huafeng
    Liu, Hong
    Wu, YongJian
    Satoh, Shin'ichi
    Ji, Rongrong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [23] Improving Adversarial Robustness via Promoting Ensemble Diversity
    Pang, Tianyu
    Xu, Kun
    Du, Chao
    Chen, Ning
    Zhu, Jun
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [24] Improving Robustness of Jet Tagging Algorithms with Adversarial Training
    Stein, A.
    Coubez, X.
    Mondal, S.
    Novak, A.
    Schmidt, A.
    COMPUTING AND SOFTWARE FOR BIG SCIENCE, 2022, 6 (1)
  • [25] Improving Adversarial Robustness of CNNs via Maximum Margin
    Wu, Jiaping
    Xia, Zhaoqiang
    Feng, Xiaoyi
    APPLIED SCIENCES-BASEL, 2022, 12 (15):
  • [26] A SIMPLE STOCHASTIC NEURAL NETWORK FOR IMPROVING ADVERSARIAL ROBUSTNESS
    Yang, Hao
    Wang, Min
    Yu, Zhengfei
    Zhou, Yun
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 2297 - 2302
  • [27] Improving Adversarial Robustness via Mutual Information Estimation
    Zhou, Dawei
    Wang, Nannan
    Gao, Xinbo
    Han, Bo
    Wang, Xiaoyu
    Zhan, Yibing
    Liu, Tongliang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [28] Adversarial attacks and adversarial robustness in computational pathology
    Ghaffari Laleh, Narmin
    Truhn, Daniel
    Veldhuizen, Gregory Patrick
    Han, Tianyu
    van Treeck, Marko
    Buelow, Roman D.
    Langer, Rupert
    Dislich, Bastian
    Boor, Peter
    Schulz, Volkmar
    Kather, Jakob Nikolas
    NATURE COMMUNICATIONS, 2022, 13 (01)
  • [30] Recent Advances in Adversarial Training for Adversarial Robustness
    Bai, Tao
    Luo, Jinqi
    Zhao, Jun
    Wen, Bihan
    Wang, Qian
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 4312 - 4321