50 records in total
- [41] Dynamic Knowledge Distillation for Pre-trained Language Models. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021, pp. 379–389
- [42] Prompt Tuning for Discriminative Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2022), 2022, pp. 3468–3473
- [43] A Close Look into the Calibration of Pre-trained Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023, pp. 1343–1367
- [44] Deep Entity Matching with Pre-Trained Language Models. Proceedings of the VLDB Endowment, 2020, 14(1), pp. 50–60
- [46] Leveraging Pre-trained Language Models for Gender Debiasing. Proceedings of the Thirteenth International Conference on Language Resources and Evaluation (LREC 2022), 2022, pp. 2188–2195
- [48] Exploring Robust Overfitting for Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2023), 2023, pp. 5506–5522
- [49] Self-conditioning Pre-Trained Language Models. Proceedings of the International Conference on Machine Learning (ICML 2022), Vol. 162, 2022
- [50] Commonsense Knowledge Transfer for Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2023), 2023, pp. 5946–5960