共 50 条
- [41] Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples 45TH IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP 2024, 2024, : 3015 - 3033
- [42] Variational Monte Carlo on a Budget - Fine-tuning pre-trained NeuralWavefunctions ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [44] Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 532 - 542
- [45] An Empirical Study of Parameter-Efficient Fine-Tuning Methods for Pre-trained Code Models 2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 397 - 408
- [47] Few-Sample Named Entity Recognition for Security Vulnerability Reports by Fine-Tuning Pre-trained Language Models DEPLOYABLE MACHINE LEARNING FOR SECURITY DEFENSE, MLHAT 2021, 2021, 1482 : 55 - 78
- [48] Enhancing Automated Essay Scoring Performance via Fine-tuning Pre-trained Language Models with Combination of Regression and Ranking FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1560 - 1569
- [49] Enhancing Health Mention Classification Through Reexamining Misclassified Samples and Robust Fine-Tuning Pre-Trained Language Models IEEE ACCESS, 2024, 12 : 190445 - 190453
- [50] Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 1701 - 1713