50 entries in total
- [2] Pruning Pre-trained Language Models Without Fine-Tuning [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
- [3] Span Fine-tuning for Pre-trained Language Models [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
- [4] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
- [8] Variational Monte Carlo on a Budget - Fine-tuning pre-trained Neural Wavefunctions [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [9] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning [J]. PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69