50 items in total
- [1] Improving Math Word Problems with Pre-trained Knowledge and Hierarchical Reasoning. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021, pp. 3384-3394
- [2] Are Pre-trained Convolutions Better than Pre-trained Transformers? 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021), Vol. 1, 2021, pp. 4349-4359
- [3] Calibration of Pre-trained Transformers. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 295-302
- [4] Classifying Math Knowledge Components via Task-Adaptive Pre-Trained BERT. Artificial Intelligence in Education (AIED 2021), Part I, vol. 12748, 2021, pp. 408-419
- [5] Emergent Modularity in Pre-trained Transformers. Findings of the Association for Computational Linguistics (ACL 2023), 2023, pp. 4066-4083
- [7] Face Inpainting with Pre-trained Image Transformers. 2022 30th Signal Processing and Communications Applications Conference (SIU), 2022
- [8] Can LLMs Facilitate Interpretation of Pre-trained Language Models? 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), 2023, pp. 3248-3268
- [9] How Different are Pre-trained Transformers for Text Ranking? Advances in Information Retrieval, Part II, vol. 13186, 2022, pp. 207-214