Improving Adversarial Robustness With Adversarial Augmentations

Cited by: 2
Authors
Chen, Chuanxi [1 ,2 ]
Ye, Dengpan [1 ,2 ]
He, Yiheng [1 ,2 ]
Tang, Long [1 ,2 ]
Xu, Yue [1 ,2 ]
Affiliations
[1] Wuhan Univ, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Robustness; Internet of Things; Security; Perturbation methods; Feature extraction; Data augmentation; Adversarial robustness; augmentations; contrastive learning (CL); deep neural networks (DNNs); Internet of Things (IoT) security;
DOI
10.1109/JIOT.2023.3301608
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Deep neural network (DNN)-based applications are being extensively researched and deployed in Internet of Things (IoT) devices in daily life owing to their impressive performance. Recently, adversarial attacks have posed a significant threat to the security of DNNs, and adversarial training has emerged as a promising and effective defense against such attacks. However, existing adversarial training methods have shown limited success in defending against attacks unseen during training, which undermines their effectiveness. Moreover, generating adversarial perturbations for adversarial training requires massive amounts of expensive labeled data, a critical obstacle to building robust DNN-based IoT applications. In this article, we first explore effective data augmentations by implementing self-supervised adversarial attacks in the latent space. We then propose new loss metric functions that avoid the collapse phenomenon of contrastive learning (CL) by measuring the distances between adversarially augmented pairs. Based on the adversarial features extracted through self-supervised CL, we propose a novel adversarial robust learning (ARL) method that performs adversarial training without any labels and yields a more generally robust encoder network. Our approach is validated on commonly used benchmark data sets and models, where it achieves adversarial robustness against different adversarial attacks comparable to that of supervised adversarial training methods. In addition, ARL outperforms state-of-the-art self-supervised adversarial learning techniques in terms of both robustness and clean prediction accuracy on the downstream classification task.
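The abstract only outlines the ARL pipeline, so the following is a minimal, hypothetical sketch of what label-free adversarial contrastive training of this kind can look like in PyTorch. The encoder interface, the NT-Xent-style contrastive loss, the PGD-style inner maximization, and all hyperparameters (eps, alpha, steps, temperature) are assumptions made for illustration; they are not the authors' exact loss metric functions or their latent-space attack.

```python
# Hypothetical sketch of label-free adversarial contrastive training in the
# spirit of the ARL idea described in the abstract. Loss, attack, and
# hyperparameters are assumptions, not the authors' exact algorithm.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, d)
    sim = torch.mm(z, z.t()) / temperature            # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))             # drop self-similarity
    # Positive of sample i in the first half is i+n in the second half, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def adversarial_view(encoder, x, eps=8/255, alpha=2/255, steps=5):
    """PGD-style inner maximization of the contrastive loss (no labels needed)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    z_clean = encoder(x).detach()                     # anchor embedding, frozen
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nt_xent_loss(encoder(x_adv), z_clean)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # ascend the contrastive loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_step(encoder, optimizer, x1, x2):
    """One unlabeled step: attack one augmented view, pull the pair back together."""
    x_adv = adversarial_view(encoder, x1)
    loss = nt_xent_loss(encoder(x_adv), encoder(x2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a sketch like this, the inner maximization needs no labels because the attack simply maximizes the contrastive loss against the clean anchor embedding, while the outer step minimizes the same loss between the adversarial view and a second augmentation; this mirrors the general mechanism the abstract describes, with the paper's specific loss metrics replacing the NT-Xent placeholder.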
Pages: 5105-5117
Number of pages: 13