共 50 条
- [21] Exploring Few-Shot Fine-Tuning Strategies for Models of Visually Grounded Speech INTERSPEECH 2022, 2022, : 1416 - 1420
- [22] Pruning Pre-trained Language ModelsWithout Fine-Tuning PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 594 - 605
- [23] Fine-Tuning of CLIP in Few-Shot Scenarios via Supervised Contrastive Learning PATTERN RECOGNITION AND COMPUTER VISION, PT III, PRCV 2024, 2025, 15033 : 104 - 117
- [24] Visual semantic alignment network based on pre-trained ViT for few-shot image classification 2024 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2024,
- [25] Revisiting k-NN for Fine-Tuning Pre-trained Language Models CHINESE COMPUTATIONAL LINGUISTICS, CCL 2023, 2023, 14232 : 327 - 338
- [27] TOKEN Is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 138 - 150
- [28] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [29] Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
- [30] An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 2286 - 2300