50 entries in total
- [41] Memorisation versus Generalisation in Pre-trained Language Models [J]. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol 1: Long Papers, 2022: 7564-7578
- [44] Understanding Online Attitudes with Pre-Trained Language Models [J]. Proceedings of the 2023 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2023), 2023: 745-752
- [45] Compressing Pre-trained Language Models by Matrix Decomposition [J]. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020), 2020: 884-889
- [46] On the Sentence Embeddings from Pre-trained Language Models [J]. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 9119-9130
- [47] Pre-trained Language Models for Keyphrase Prediction: A Review [J]. ICT Express, 2024, 10(4): 871-890
- [49] Evaluating and Inducing Personality in Pre-trained Language Models [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
- [50] Evaluating the Summarization Comprehension of Pre-Trained Language Models [J]. Lobachevskii Journal of Mathematics, 2023, 44: 3028-3039