50 records in total
- [1] Pruning Pre-trained Language Models Without Fine-Tuning [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
- [2] Span Fine-tuning for Pre-trained Language Models [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
- [3] Overcoming Catastrophic Forgetting for Fine-Tuning Pre-trained GANs [J]. MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 293 - 308
- [4] Waste Classification by Fine-Tuning Pre-trained CNN and GAN [J]. INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2021, 21 (08): 65 - 70
- [5] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning [J]. PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
- [7] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models [J]. 2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 575 - 582
- [8] Improving Fine-tuning Pre-trained Models on Small Source Code Datasets via Variational Information Bottleneck [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ANALYSIS, EVOLUTION AND REENGINEERING, SANER, 2023, : 331 - 342
- [9] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [J]. PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153