50 records in total
- [1] Are Pre-trained Convolutions Better than Pre-trained Transformers? 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021: 4349-4359
- [2] Calibration of Pre-trained Transformers. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020: 295-302
- [3] Emergent Modularity in Pre-trained Transformers. Findings of the Association for Computational Linguistics, ACL 2023, 2023: 4066-4083
- [6] How Different are Pre-trained Transformers for Text Ranking? Advances in Information Retrieval, Pt. II, 2022, 13186: 207-214
- [8] Predicting Terms in IS-A Relations with Pre-trained Transformers. 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, IJCNLP-AACL 2023, 2023: 134-148
- [9] Generative pre-trained transformers (GPT) for surface engineering. Surface & Coatings Technology, 2023, 466
- [10] Generating Extended and Multilingual Summaries with Pre-trained Transformers. LREC 2022: Thirteenth International Conference on Language Resources and Evaluation, 2022: 1640-1650