共 50 条
- [2] PtbStolen: Pre-trained Encoder Stealing Through Perturbed Samples [J]. EMERGING INFORMATION SECURITY AND APPLICATIONS, EISA 2023, 2024, 2004 : 1 - 19
- [3] AWEncoder: Adversarial Watermarking Pre-Trained Encoders in Contrastive Learning [J]. APPLIED SCIENCES-BASEL, 2023, 13 (06):
- [4] Pre-trained Online Contrastive Learning for Insurance Fraud Detection [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 20, 2024, : 22511 - 22519
- [6] Syntax-guided Contrastive Learning for Pre-trained Language Model [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 2430 - 2440
- [7] Clinical diagnosis normalization based on contrastive learning and pre-trained model [J]. Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2024, 52 (05): : 23 - 28
- [8] ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning [J]. 2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE, 2023, : 2476 - 2487
- [9] EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 2081 - 2095
- [10] Adder Encoder for Pre-trained Language Model [J]. CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 339 - 347