共 50 条
- [1] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [2] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
- [3] Parameter-efficient fine-tuning of large-scale pre-trained language models Nature Machine Intelligence, 2023, 5 : 220 - 235
- [5] Pruning Pre-trained Language ModelsWithout Fine-Tuning PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
- [6] Span Fine-tuning for Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
- [7] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning 2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
- [8] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
- [9] Waste Classification by Fine-Tuning Pre-trained CNN and GAN INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (08): : 65 - 70
- [10] Towards Efficient Fine-Tuning of Pre-trained Code Models: An Experimental Study and Beyond PROCEEDINGS OF THE 32ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2023, 2023, : 39 - 51