50 entries in total
- [32] Backdoor Pre-trained Models Can Transfer to All [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 3141 - 3158
- [33] Prompt Tuning for Discriminative Pre-trained Language Models [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 3468 - 3473
- [34] How to train your pre-trained GAN models [J]. APPLIED INTELLIGENCE, 2023, 53 (22) : 27001 - 27026
- [35] Dynamic Knowledge Distillation for Pre-trained Language Models [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 379 - 389
- [36] Impact of Morphological Segmentation on Pre-trained Language Models [J]. INTELLIGENT SYSTEMS, PT II, 2022, 13654 : 402 - 416
- [37] TED TALK TEASER GENERATION WITH PRE-TRAINED MODELS [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8067 - 8071
- [39] Leveraging Pre-trained Language Models for Gender Debiasing [J]. LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 2188 - 2195