共 50 条
- [1] An Empirical Study of Parameter-Efficient Fine-Tuning Methods for Pre-trained Code Models 2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 397 - 408
- [2] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
- [3] Span Fine-tuning for Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
- [6] Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1912 - 1921
- [7] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2286 - 2300
- [8] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models 2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 575 - 582
- [9] Parameter-efficient fine-tuning of large-scale pre-trained language models Nature Machine Intelligence, 2023, 5 : 220 - 235