共 50 条
- [1] Evaluating the Summarization Comprehension of Pre-Trained Language Models [J]. Lobachevskii Journal of Mathematics, 2023, 44 : 3028 - 3039
- [2] Evaluating Commonsense in Pre-Trained Language Models [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 9733 - 9740
- [3] Modeling Content Importance for Summarization with Pre-trained Language Models [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 3606 - 3611
- [4] From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [5] Evaluating and Inducing Personality in Pre-trained Language Models [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [7] Pre-trained language models evaluating themselves - A comparative study [J]. PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 180 - 187
- [9] Using Pre-Trained Language Models for Abstractive DBPEDIA Summarization: A Comparative Study [J]. KNOWLEDGE GRAPHS: SEMANTICS, MACHINE LEARNING, AND LANGUAGES, 2023, 56 : 19 - 37
- [10] Somun: entity-centric summarization incorporating pre-trained language models [J]. NEURAL COMPUTING & APPLICATIONS, 2021, 33 (10): : 5301 - 5311