共 50 条
- [21] Fine-Tuning BERT-Based Pre-Trained Models for Arabic Dependency Parsing [J]. APPLIED SCIENCES-BASEL, 2023, 13 (07):
- [22] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction [J]. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019, : 1388 - 1398
- [23] Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models [J]. 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3875 - 3887
- [24] Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource [J]. IEEE ACCESS, 2022, 10 : 107056 - 107065
- [25] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [26] Disfluencies and Fine-Tuning Pre-trained Language Models for Detection of Alzheimer's Disease [J]. INTERSPEECH 2020, 2020, : 2162 - 2166
- [27] Towards Efficient Fine-Tuning of Pre-trained Code Models: An Experimental Study and Beyond [J]. PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 39 - 51
- [28] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 3246 - 3264