IPES: Improved Pre-trained Encoder Stealing Attack in Contrastive Learning

Cited by: 0
Authors
Zhang, Chuan [1 ]
Li, Zhuopeng [1 ]
Liang, Haotian [1 ]
Liang, Jinwen [2 ]
Liu, Ximeng [3 ]
Zhu, Liehuang [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing, Peoples R China
[2] Hong Kong Polytechn Univ, Dept Comp, Hong Kong, Peoples R China
[3] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
DOI
10.1109/iThings-GreenCom-CPSCom-SmartData-Cybermatics60724.2023.00078
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Recent studies have shed light on security vulnerabilities in Encoder-as-a-Service (EaaS) systems that enable the theft of valuable encoder attributes such as functionality. However, many existing attacks either simply apply data augmentation or rely solely on the idea of contrastive learning to improve performance, without analyzing or combining the two aspects. They also overlook the potential of harnessing the inner characteristics of the encoder, specifically its robustness. We therefore introduce Improved Pre-trained Encoder Stealing (IPES), a novel approach that capitalizes on augmented and perturbed samples to enhance the surrogate encoder's ability to replicate the target encoder. Additionally, we place emphasis on optimizing the query budget by leveraging the inherent robustness of well-trained encoders. By combining the idea of contrastive learning with the inherent robustness of the encoder, IPES improves downstream accuracy by more than 14% compared to conventional methods.
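The abstract describes the general encoder-stealing setting: the attacker queries the victim encoder with augmented and perturbed samples and trains a surrogate to reproduce the returned embeddings. A minimal sketch of that query-and-fit loop is shown below, under loud assumptions: both encoders are toy linear maps, the objective is a plain squared-error fit rather than the paper's contrastive-style loss, and all names (`W_target`, `W_surr`, `augment`, the dimensions) are hypothetical illustrations, not the authors' implementation.

```python
import random

random.seed(0)

D_IN, D_OUT = 4, 3  # toy dimensions for illustration only

# Hypothetical "target" encoder: a fixed random linear map standing in for
# the victim EaaS encoder, whose weights the attacker cannot inspect.
W_target = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_OUT)]

def encode(x, W):
    # Linear "encoder": embedding = W @ x.
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def augment(x, sigma=0.05):
    # Perturbed/augmented view of a query sample (small Gaussian noise),
    # standing in for the augmented and perturbed samples IPES uses.
    return [x_i + random.gauss(0, sigma) for x_i in x]

# Surrogate encoder the attacker trains to replicate the target.
W_surr = [[0.0] * D_IN for _ in range(D_OUT)]

queries = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(64)]
lr = 0.05
for _ in range(400):
    for x in queries:
        xa = augment(x)
        z_t = encode(xa, W_target)  # one query to the target encoder
        z_s = encode(xa, W_surr)
        # SGD step on squared embedding error (stand-in for the
        # contrastive-style objective described in the abstract).
        for i in range(D_OUT):
            delta = z_s[i] - z_t[i]
            for j in range(D_IN):
                W_surr[i][j] -= lr * delta * xa[j]

# A well-trained encoder is robust to small perturbations, so the surrogate
# fitted on augmented queries should also agree on clean inputs.
err = sum(
    (zs - zt) ** 2
    for x in queries
    for zs, zt in zip(encode(x, W_surr), encode(x, W_target))
) / (len(queries) * D_OUT)
print(f"mean embedding error: {err:.6f}")
```

Because each augmented query still yields the target's true embedding for that perturbed input, the surrogate converges to the target map; the final check on clean inputs mirrors the robustness argument the abstract gestures at, where agreement under perturbation transfers to agreement on unperturbed data.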
Pages: 354-361 (8 pages)