BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning

Cited: 0
Authors
Jia, Jinyuan [1]
Liu, Yupei [1]
Gong, Neil Zhenqiang [1]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
DOI
10.1109/SP46214.2022.00021
CLC Classification
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
摘要
Self-supervised learning in computer vision aims to pre-train an image encoder using a large amount of unlabeled images or (image, text) pairs. The pre-trained image encoder can then be used as a feature extractor to build downstream classifiers for many downstream tasks with a small amount of or no labeled training data. In this work, we propose BadEncoder, the first backdoor attack to self-supervised learning. In particular, our BadEncoder injects backdoors into a pre-trained image encoder such that the downstream classifiers built based on the backdoored image encoder for different downstream tasks simultaneously inherit the backdoor behavior. We formulate our BadEncoder as an optimization problem and we propose a gradient descent based method to solve it, which produces a backdoored image encoder from a clean one. Our extensive empirical evaluation results on multiple datasets show that our BadEncoder achieves high attack success rates while preserving the accuracy of the downstream classifiers. We also show the effectiveness of BadEncoder using two publicly available, real-world image encoders, i.e., Google's image encoder pre-trained on ImageNet and OpenAI's Contrastive Language-Image Pre-training (CLIP) image encoder pre-trained on 400 million (image, text) pairs collected from the Internet. Moreover, we consider defenses including Neural Cleanse and MNTD (empirical defenses) as well as PatchGuard (a provable defense). Our results show that these defenses are insufficient to defend against BadEncoder, highlighting the need for new defenses against our BadEncoder. Our code is publicly available at: https://github.com/jjy1994/BadEncoder.
Pages: 2043-2059
Page count: 17
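The abstract describes BadEncoder as an optimization problem solved by gradient descent: an effectiveness term pulls trigger-stamped inputs toward the embedding of attacker-chosen reference inputs, while a utility term keeps clean embeddings close to the clean encoder's. The toy sketch below illustrates that formulation with a linear "encoder" and numerical gradients; the encoder, data shapes, trigger pattern, and loss weights are all hypothetical simplifications, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical toy setup: a linear map W stands in for a deep image encoder.
rng = np.random.default_rng(0)
d_in, d_out = 8, 4

W_clean = rng.normal(size=(d_out, d_in))    # pre-trained clean encoder
shadow = rng.normal(size=(32, d_in))        # unlabeled shadow dataset
trigger = np.zeros(d_in)
trigger[0] = 3.0                            # attacker-chosen trigger pattern
reference = rng.normal(size=(16, d_in))     # attacker's reference inputs

def embed(M, X):
    """L2-normalized embeddings, so dot products are cosine similarities."""
    Z = X @ M.T
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def loss(w_flat, lam=1.0):
    M = w_flat.reshape(d_out, d_in)
    # Effectiveness: triggered inputs should align with the reference embedding.
    z_ref = embed(M, reference).mean(axis=0)
    z_ref = z_ref / np.linalg.norm(z_ref)
    eff = -np.mean(embed(M, shadow + trigger) @ z_ref)
    # Utility: clean embeddings should stay close to the clean encoder's.
    util = -np.mean(np.sum(embed(M, shadow) * embed(W_clean, shadow), axis=1))
    return eff + lam * util

# Plain gradient descent with central-difference gradients (32 parameters).
w = W_clean.ravel().copy()
eps, lr = 1e-4, 0.05
for _ in range(200):
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    w -= lr * g

W_bad = w.reshape(d_out, d_in)
print("loss clean vs. backdoored:", loss(W_clean.ravel()), loss(w))
```

Because the clean encoder already minimizes the utility term, any decrease in total loss reflects triggered inputs moving toward the reference embedding, which is the mechanism by which downstream classifiers inherit the backdoor.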
Related Papers
50 entries total
  • [31] EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning
    Liu, Hongbin
    Jia, Jinyuan
    Qu, Wenjie
    Gong, Neil Zhenqiang
    [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 2081 - 2095
  • [32] CBAs: Character-level Backdoor Attacks against Chinese Pre-trained Language Models
    He, Xinyu
    Hao, Fengrui
    Gu, Tianlong
    Chang, Liang
    [J]. ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2024, 27 (03)
  • [33] Interpretability of Speech Emotion Recognition modelled using Self-Supervised Speech and Text Pre-Trained Embeddings
    Girish, K. V. Vijay
    Konjeti, Srikanth
    Vepa, Jithendra
    [J]. INTERSPEECH 2022, 2022, : 4496 - 4500
  • [34] SSCLNet: A Self-Supervised Contrastive Loss-Based Pre-Trained Network for Brain MRI Classification
    Mishra, Animesh
    Jha, Ritesh
    Bhattacharjee, Vandana
    [J]. IEEE ACCESS, 2023, 11 : 6673 - 6681
  • [35] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks
    Xi, Zhaohan
    Du, Tianyu
    Li, Changjiang
    Pang, Ren
    Ji, Shouling
    Chen, Jinghui
    Ma, Fenglong
    Wang, Ting
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [36] Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-level Backdoor Attacks
    Zhang, Zhengyan
    Xiao, Guangxuan
    Li, Yongwei
    Lv, Tian
    Qi, Fanchao
    Liu, Zhiyuan
    Wang, Yasheng
    Jiang, Xin
    Sun, Maosong
    [J]. MACHINE INTELLIGENCE RESEARCH, 2023, 20 (02) : 180 - 193
  • [37] Backdoor Pre-trained Models Can Transfer to All
    Shen, Lujia
    Ji, Shouling
    Zhang, Xuhong
    Li, Jinfeng
    Chen, Jing
    Shi, Jie
    Fang, Chengfang
    Yin, Jianwei
    Wang, Ting
    [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 3141 - 3158
  • [38] Microstructure segmentation with deep learning encoders pre-trained on a large microscopy dataset
    Stuckner, Joshua
    Harder, Bryan
    Smith, Timothy M.
    [J]. NPJ COMPUTATIONAL MATERIALS, 2022, 8 (01)
  • [40] Training Set Cleansing of Backdoor Poisoning by Self-Supervised Representation Learning
    Wang, Hang
    Karami, Sahar
    Dia, Ousmane
    Ritter, Hippolyt
    Emamjomeh-Zadeh, Ehsan
    Chen, Jiahui
    Xiang, Zhen
    Miller, David J.
    Kesidis, George
    [J]. IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2023