共 50 条
- [1] Making Pre-trained Language Models Better Few-shot Learners [J]. 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 3816 - 3830
- [2] Few-Shot NLG with Pre-Trained Language Model [J]. 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 183 - 190
- [3] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [J]. PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153
- [4] Aliasing Backdoor Attacks on Pre-trained Models [J]. PROCEEDINGS OF THE 32ND USENIX SECURITY SYMPOSIUM, 2023, : 2707 - 2724
- [5] TOKEN Is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models [J]. TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 138 - 150
- [7] Better Few-Shot Text Classification with Pre-trained Language Model [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 537 - 548
- [8] Investigating Prompt Learning for Chinese Few-Shot Text Classification with Pre-Trained Language Models [J]. APPLIED SCIENCES-BASEL, 2022, 12 (21):
- [9] Language Models are Few-Shot Learners [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33