50 entries in total
- [21] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models. 2022 IEEE 46th Annual Computers, Software, and Applications Conference (COMPSAC 2022), 2022: 575-582.
- [22] Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models. Findings of the Association for Computational Linguistics: ACL 2023, 2023: 5448-5458.
- [24] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning. Proceedings of the Third Workshop on Insights from Negative Results in NLP (Insights 2022), 2022: 144-153.
- [25] Revisiting k-NN for Fine-Tuning Pre-trained Language Models. Chinese Computational Linguistics (CCL 2023), 2023, 14232: 327-338.
- [27] Detecting the Stages of Alzheimer's Disease with Pre-trained Deep Learning Architectures. Arabian Journal for Science and Engineering, 2022, 47: 2201-2218.
- [29] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning. 2021 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), 2021.
- [30] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively. Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022.