50 entries in total
- [1] Few-Shot NLG with Pre-Trained Language Model [J]. 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 183 - 190
- [2] PPT: Pre-trained Prompt Tuning for Few-shot Learning [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 8410 - 8423
- [3] Better Few-Shot Text Classification with Pre-trained Language Model [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 537 - 548
- [4] FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [5] Making Pre-trained Language Models Better Few-shot Learners [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 3816 - 3830
- [6] Investigating Prompt Learning for Chinese Few-Shot Text Classification with Pre-Trained Language Models [J]. APPLIED SCIENCES-BASEL, 2022, 12 (21)
- [7] Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9425 - 9434
- [8] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [J]. PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153
- [9] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [10] TOKEN Is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models [J]. TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 138 - 150