50 entries in total
- [1] Span Fine-tuning for Pre-trained Language Models [J]. Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 1970-1979
- [2] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning [J]. Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion (LTEDI 2022), 2022: 59-69
- [3] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [J]. Proceedings of the Third Workshop on Insights from Negative Results in NLP (Insights 2022), 2022: 144-153
- [4] Revisiting k-NN for Fine-Tuning Pre-trained Language Models [J]. Chinese Computational Linguistics, CCL 2023, 2023, 14232: 327-338
- [5] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively [J]. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022
- [6] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models [J]. 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol. 1 (ACL-IJCNLP 2021), 2021: 2286-2300
- [7] Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation [J]. 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), 2023: 1912-1921
- [8] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction [J]. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 1388-1398
- [9] Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource [J]. IEEE Access, 2022, 10: 107056-107065
- [10] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease [J]. Interspeech 2020, 2020: 2162-2166