50 items total
- [31] Compressing Pre-trained Models of Code into 3 MB. In Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering (ASE 2022), 2022.
- [32] An Empirical Comparison of Pre-Trained Models of Source Code. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), 2023, pp. 2136–2148.
- [33] What Do They Capture? A Structural Analysis of Pre-Trained Language Models for Source Code. In 2022 ACM/IEEE 44th International Conference on Software Engineering (ICSE 2022), 2022, pp. 2377–2388.
- [34] What Do Pre-trained Code Models Know About Code? In 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021), 2021, pp. 1332–1336.
- [35] A Study of Pre-trained Language Models in Natural Language Processing. In 2020 IEEE International Conference on Smart Cloud (SmartCloud 2020), 2020, pp. 116–121.
- [36] Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code. In Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022), 2022, pp. 1073–1084.
- [37] How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? In Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
- [38] Entity Resolution Based on Pre-trained Language Models with Two Attentions. In Web and Big Data, Part III (APWeb-WAIM 2023), vol. 14333, 2024, pp. 433–448.
- [40] A Brief Review of Relation Extraction Based on Pre-Trained Language Models. In Fuzzy Systems and Data Mining VI, vol. 331, 2020, pp. 775–789.