50 entries in total
- [1] Parameter-efficient fine-tuning of large-scale pre-trained language models. Nature Machine Intelligence, 2023, 5: 220-235
- [3] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning. Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion (LTEDI 2022), 2022: 59-69
- [6] Bi-tuning: Efficient Transfer from Pre-trained Models. Machine Learning and Knowledge Discovery in Databases: Research Track, ECML PKDD 2023, Pt V, 2023, 14173: 357-373
- [7] 𝒴-Tuning: an efficient tuning paradigm for large-scale pre-trained models via label representation learning. Frontiers of Computer Science, 2024, 18 (4)
- [8] Prompt Tuning for Discriminative Pre-trained Language Models. Findings of the Association for Computational Linguistics (ACL 2022), 2022: 3468-3473
- [9] Tuning Pre-trained Model via Moment Probing. 2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 11769-11779