共 50 条
- [21] TOKEN Is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 138 - 150
- [22] Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [23] Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners 2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 23881 - 23890
- [24] Better Few-Shot Text Classification with Pre-trained Language Model ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2021, PT II, 2021, 12892 : 537 - 548
- [26] Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12813 - 12832
- [27] Towards Fine-tuning Pre-trained Language Models with Integer Forward and Backward Propagation 17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023, : 1912 - 1921
- [28] Enhancing Machine-Generated Text Detection: Adversarial Fine-Tuning of Pre-Trained Language Models IEEE ACCESS, 2024, 12 : 65333 - 65340
- [29] Efficient Fine-Tuning for Low-Resource Tibetan Pre-trained Language Models ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 410 - 422
- [30] SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2134 - 2146