- [41] Evaluating and Inducing Personality in Pre-trained Language Models [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [42] Evaluating the Summarization Comprehension of Pre-Trained Language Models [J]. Lobachevskii Journal of Mathematics, 2023, 44: 3028-3039.
- [43] Pre-trained models for natural language processing: A survey [J]. Science China Technological Sciences, 2020, 63: 1872-1897.
- [44] Robust Lottery Tickets for Pre-trained Language Models [J]. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: Long Papers, 2022: 2211-2224.
- [46] Leveraging pre-trained language models for code generation [J]. Complex & Intelligent Systems, 2024, 10(03): 3955-3980.
- [47] What Makes Pre-trained Language Models Better Zero-shot Learners? [J]. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol 1, 2023: 2288-2303.
- [49] μBERT: Mutation Testing using Pre-Trained Language Models [J]. 2022 IEEE 15th International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022), 2022: 160-169.
- [50] In-Context Analogical Reasoning with Pre-Trained Language Models [J]. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol 1, 2023: 1953-1969.