50 records in total
- [1] Aliasing Backdoor Attacks on Pre-trained Models. Proceedings of the 32nd USENIX Security Symposium, 2023: 2707-2724.
- [2] BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. 43rd IEEE Symposium on Security and Privacy (SP 2022), 2022: 2043-2059.
- [3] Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 3023-3032.
- [4] Backdoor Pre-trained Models Can Transfer to All. CCS '21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021: 3141-3158.
- [6] Towards Inadequately Pre-trained Models in Transfer Learning. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 19340-19351.
- [8] Multi-target Backdoor Attacks for Code Pre-trained Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol 1, 2023: 7236-7254.
- [9] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [10] Meta Distant Transfer Learning for Pre-trained Language Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 9742-9752.