共 50 条
- [41] Phased Instruction Fine-Tuning for Large Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 5735 - 5748
- [42] Improve Performance of Fine-tuning Language Models with Prompting INFOCOMMUNICATIONS JOURNAL, 2023, 15 : 62 - 68
- [43] HackMentor: Fine-Tuning Large Language Models for Cybersecurity 2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 452 - 461
- [45] Fine-tuning language models to recognize semantic relations Language Resources and Evaluation, 2023, 57 : 1463 - 1486
- [46] Fine-Tuning Language Models with Just Forward Passes ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
- [47] AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 6279 - 6299
- [48] COMBINING CONTRASTIVE AND NON-CONTRASTIVE LOSSES FOR FINE-TUNING PRETRAINED MODELS IN SPEECH ANALYSIS 2022 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, SLT, 2022, : 876 - 883
- [49] IIITT at CASE 2021 Task 1: Leveraging Pretrained Language Models for Multilingual Protest Detection CASE 2021: THE 4TH WORKSHOP ON CHALLENGES AND APPLICATIONS OF AUTOMATED EXTRACTION OF SOCIO-POLITICAL EVENTS FROM TEXT (CASE), 2021, : 98 - 104
- [50] Clinical information extraction for lower-resource languages and domains with few-shot learning using pretrained language models and prompting NATURAL LANGUAGE PROCESSING, 2024,