Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning

Citations: 0
Authors
Liu, Hongbin [1 ]
Qu, Wenjie [2 ]
Jia, Jinyuan [3 ]
Gong, Neil Zhenqiang [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Natl Univ Singapore, Singapore, Singapore
[3] Penn State Univ, University Park, PA USA
Keywords
ROBUSTNESS; ATTACKS;
DOI
10.1109/SPW63631.2024.00019
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Classifiers in supervised learning face various security and privacy issues, e.g., 1) data poisoning attacks, backdoor attacks, and adversarial examples on the security side, and 2) inference attacks against the training data on the privacy side. Various secure and privacy-preserving supervised learning algorithms with formal guarantees have been proposed to address these issues, but they suffer from limitations such as accuracy loss, weak certified security guarantees, and/or inefficiency. Self-supervised learning pre-trains encoders on unlabeled data. Given a pre-trained encoder as a feature extractor, supervised learning can train a simple yet accurate classifier using a small amount of labeled training data. In this work, we perform the first systematic, principled measurement study of whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms. Our key findings are that a pre-trained encoder substantially improves 1) both the accuracy under no attacks and the certified security guarantees against data poisoning and backdoor attacks of state-of-the-art secure learning algorithms (i.e., bagging and kNN), 2) the certified security guarantees of randomized smoothing against adversarial examples without sacrificing its accuracy under no attacks, and 3) the accuracy of differentially private classifiers.
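The pipeline the abstract describes (a frozen pre-trained encoder feeding a simple downstream learner) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes torchvision's ImageNet ResNet-50 as a stand-in for the self-supervised encoders the paper studies, and scikit-learn's kNN as the simple classifier (kNN being one of the certifiably secure learners the abstract mentions); helper names such as extract_features and fit_simple_classifier are hypothetical.

```python
# Minimal sketch: frozen pre-trained encoder + simple classifier on few labels.
# Assumptions (not from the paper): torchvision's ImageNet ResNet-50 stands in
# for a self-supervised encoder, and scikit-learn's kNN is the downstream learner.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.neighbors import KNeighborsClassifier

# Load a pre-trained backbone and drop its classification head so it acts
# as a frozen feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()
backbone.eval()

# Standard ImageNet preprocessing for the stand-in encoder.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    # images: a batch tensor of shape (N, 3, 224, 224) produced by `preprocess`.
    return backbone(images).cpu().numpy()

def fit_simple_classifier(train_images, train_labels, k=5):
    # Train kNN on encoder features extracted from a small labeled set.
    feats = extract_features(train_images)
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(feats, train_labels)
    return clf
```

In this setup only the small kNN (or bagging) stage sees the labeled data, which is why certified defenses and differentially private training applied to that stage can benefit from the encoder's features without retraining the encoder itself.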
Pages: 144-156
Number of pages: 13