共 50 条
- [44] GRAM: Fast Fine-tuning of Pre-trained Language Models for Content-based Collaborative Filtering [J]. NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 839 - 851
- [45] GamMa: Efficient Fine-Tuning of Pre-Trained Language Models Using Gradient Activation Mapping Masking [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
- [47] On the Interplay Between Fine-tuning and Sentence-level Probing for Linguistic Knowledge in Pre-trained Transformers [J]. FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 2502 - 2516
- [48] Is Fine-tuning Needed? Pre-trained Language Models Are Near Perfect for Out-of-Domain Detection [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12813 - 12832
- [49] Poster: Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data [J]. PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024,
- [50] Enhancing Machine-Generated Text Detection: Adversarial Fine-Tuning of Pre-Trained Language Models [J]. IEEE ACCESS, 2024, 12 : 65333 - 65340