50 items in total
- [31] TwitterBERT: Framework for Twitter Sentiment Analysis Based on Pre-trained Language Model Representations [J]. EMERGING TRENDS IN INTELLIGENT COMPUTING AND INFORMATICS: DATA SCIENCE, INTELLIGENT INFORMATION SYSTEMS AND SMART COMPUTING, 2020, 1073: 428-437
- [32] Automated LOINC Standardization Using Pre-trained Large Language Models [J]. MACHINE LEARNING FOR HEALTH, VOL 193, 2022, 193: 343-355
- [33] Labeling Explicit Discourse Relations Using Pre-trained Language Models [J]. TEXT, SPEECH, AND DIALOGUE (TSD 2020), 2020, 12284: 79-86
- [34] Controlling Translation Formality Using Pre-trained Multilingual Language Models [J]. PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE TRANSLATION (IWSLT 2022), 2022: 327-340
- [35] Repairing Security Vulnerabilities Using Pre-trained Programming Language Models [J]. 52ND ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOP VOLUME (DSN-W 2022), 2022: 111-116
- [36] A Study of Pre-trained Language Models in Natural Language Processing [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020: 116-121
- [37] Enhancing Language Generation with Effective Checkpoints of Pre-trained Language Model [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 2686-2694
- [38] From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [39] Probing Pre-Trained Language Models for Disease Knowledge [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021: 3023-3033