50 records in total
- [1] MEFT: Memory-Efficient Fine-Tuning through Sparse Adapter. PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 2375-2388
- [2] Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models. PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024: 1-17
- [3] Memory-Efficient Fine-Tuning of Compressed Large Language Models via sub-4-bit Integer Quantization. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [4] Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
- [5] Efficient Index Learning via Model Reuse and Fine-tuning. 2023 IEEE 39TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING WORKSHOPS, ICDEW, 2023: 60-66
- [6] Fine-tuning Pipeline for Hand Image Generation Using Diffusion Model. 2024 NICOGRAPH INTERNATIONAL, NICOINT 2024, 2024: 58-63
- [7] Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning. 61ST CONFERENCE OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023: 1239-1248
- [8] MultiFiT: Efficient Multi-lingual Language Model Fine-tuning. 2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019: 5702-5707
- [9] On the Effectiveness of Parameter-Efficient Fine-Tuning. THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37, NO 11, 2023: 12799-12807
- [10] PockEngine: Sparse and Efficient Fine-tuning in a Pocket. 56TH IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE, MICRO 2023, 2023: 1381-1394