50 items in total
- [1] Fine-tuning Pre-Trained Transformer Language Models to Distantly Supervised Relation Extraction [J]. 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019), 2019: 1388-1398
- [3] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning [J]. 2021 IEEE Symposium Series on Computational Intelligence (IEEE SSCI 2021), 2021
- [4] Sentiment Analysis Using Pre-Trained Language Model With No Fine-Tuning and Less Resource [J]. IEEE Access, 2022, 10: 107056-107065
- [5] Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models [J]. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 3875-3887
- [6] Fine-Tuning Pre-Trained Model to Extract Undesired Behaviors from App Reviews [J]. 2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), 2022: 1125-1134
- [7] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning [J]. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
- [8] HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation [J]. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), Vol. 1, 2023: 3246-3264
- [10] Monkeypox Virus Detection Using Pre-trained Deep Learning-based Approaches [J]. Journal of Medical Systems, 46