50 entries in total
- [1] FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection. IEEE Access, 2024, 12: 160396-160417
- [2] Reducing Communication Overhead in Federated Learning for Pre-Trained Language Models Using Parameter-Efficient Finetuning. Conference on Lifelong Learning Agents, 2023, 232: 456-469
- [3] An Empirical Analysis of Parameter-Efficient Methods for Debiasing Pre-Trained Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 15730-15745
- [4] ADT: An Additive Delta-Tuning Approach for Parameter-Efficient Tuning in Pre-Trained Language Models. 2024 6th International Conference on Natural Language Processing (ICNLP 2024), 2024: 382-386
- [5] Parameter-Efficient Fine-Tuning of Large-Scale Pre-Trained Language Models. Nature Machine Intelligence, 2023, 5: 220-235
- [6] Hadamard Adapter: An Extreme Parameter-Efficient Adapter Tuning Method for Pre-Trained Language Models. Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM 2023), 2023: 276-285
- [8] An Empirical Study of Parameter-Efficient Fine-Tuning Methods for Pre-Trained Code Models. 2023 38th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2023: 397-408
- [9] Parameter-Efficient Fine-Tuning of Pre-Trained Large Language Models for Financial Text Analysis. Artificial Intelligence Research (SACAIR 2024), 2025, 2326: 3-20
- [10] Neural Architecture Search for Parameter-Efficient Fine-Tuning of Large Pre-Trained Language Models. Findings of the Association for Computational Linguistics (ACL 2023), 2023: 8506-8515