Robust Pre-Training by Adversarial Contrastive Learning

Cited by: 0
Authors
Jiang, Ziyu [1 ]
Chen, Tianlong [2 ]
Chen, Ting [3 ]
Wang, Zhangyang [2 ]
Affiliations
[1] Texas A&M Univ, College Stn, TX 77843 USA
[2] Univ Texas Austin, Austin, TX USA
[3] Google Res, Brain Team, Mountain View, CA USA
Keywords
Not provided
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness [1]. In this work, we improve robustness-aware self-supervised pre-training by learning representations that are consistent under both data augmentations and adversarial perturbations. Our approach leverages a recent contrastive learning framework [2], which learns representations by maximizing feature consistency under differently augmented views. This fits particularly well with the goal of adversarial robustness, as one cause of adversarial fragility is the lack of feature invariance, i.e., small input perturbations can result in undesirably large changes in features or even predicted labels. We explore various options for formulating the contrastive task, and demonstrate that by injecting adversarial perturbations, contrastive pre-training can lead to models that are both label-efficient and robust. We empirically evaluate the proposed Adversarial Contrastive Learning (ACL) and show that it consistently outperforms existing methods. For example, on the CIFAR-10 dataset, ACL outperforms the previous state-of-the-art unsupervised robust pre-training approach [1] by 2.99% on robust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL pre-training can improve semi-supervised adversarial training, even when only a few labeled examples are available. Our code and pre-trained models have been released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.
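To make the recipe in the abstract concrete, the sketch below shows one way the idea can be instantiated in PyTorch: a PGD attack crafts perturbations that maximize a contrastive (NT-Xent) loss between two views of the same image, and the encoder is then trained to minimize that same loss, enforcing feature invariance under both augmentation and attack. This is an illustrative reconstruction, not the authors' released implementation (the repository above is authoritative); the function names (nt_xent, contrastive_pgd, acl_step) and hyperparameters (epsilon = 8/255, 5 attack steps, temperature 0.5) are assumptions, and the paper itself explores several pairings of standard and adversarial views.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR-style) loss: each sample's positive is its other view."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit-norm rows
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-similarity
    # Row i's positive sits at i+n (and vice versa): the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def contrastive_pgd(encoder, x1, x2, eps=8/255, alpha=2/255, steps=5):
    """PGD that MAXIMIZES the contrastive loss, producing an adversarial view of x1.
    All hyperparameters here are illustrative assumptions, not the paper's settings."""
    delta = torch.empty_like(x1).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = nt_xent(encoder((x1 + delta).clamp(0, 1)), encoder(x2))
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x1 + delta).clamp(0, 1).detach()

def acl_step(encoder, optimizer, x1, x2):
    """One pre-training step: attack one augmented view, then minimize the same loss."""
    x1_adv = contrastive_pgd(encoder, x1, x2)
    loss = nt_xent(encoder(x1_adv), encoder(x2))         # label-free robust objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design point is that the inner maximization and the outer minimization share a single unsupervised objective, so no labels are needed to obtain a robustness-aware initialization from which standard or semi-supervised adversarial fine-tuning can start.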
Pages: 12