Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models

Cited by: 0
Authors
Lawton, Neal [1 ]
Kumar, Anoop [2 ]
Thattai, Govind [2 ]
Galstyan, Aram [2 ]
Ver Steeg, Greg [2 ]
Affiliations
[1] Information Sciences Institute, Marina Del Rey, CA 90292 USA
[2] Amazon Alexa AI, Redmond, WA USA
Abstract
Parameter-efficient tuning (PET) methods fit pre-trained language models (PLMs) to downstream tasks by either computing a small compressed update for a subset of model parameters, or appending and fine-tuning a small number of new model parameters to the pre-trained network. Hand-designed PET architectures from the literature perform well in practice, but have the potential to be improved via automated neural architecture search (NAS). We propose an efficient NAS method for learning PET architectures via structured and unstructured pruning. We present experiments on GLUE demonstrating the effectiveness of our algorithm and discuss how PET architectural design choices affect performance in practice.
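As a concrete illustration of the abstract's setup, the following is a minimal, hypothetical sketch (not the authors' implementation) of a PET module whose low-rank update is scaled by a learnable gate; driving such gates toward zero with a sparsity penalty is one way structured pruning can be used to search over PET architectures. The GatedLoRALinear class, the per-module gate, and the L1 penalty are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: a LoRA-style PET module with a prunable per-module gate.
import torch
import torch.nn as nn

class GatedLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update scaled by a prunable gate."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep pre-trained weights frozen
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.gate = nn.Parameter(torch.ones(()))  # structured on/off gate for this module

    def forward(self, x):
        update = x @ self.lora_a.T @ self.lora_b.T
        return self.base(x) + self.gate * update

# Toy usage: fine-tune only the gated low-rank updates, with an L1 penalty on the
# gates so PET modules that contribute little can be pruned after training.
torch.manual_seed(0)
layers = nn.Sequential(
    GatedLoRALinear(nn.Linear(16, 16)), nn.ReLU(),
    GatedLoRALinear(nn.Linear(16, 2)),
)
trainable = [p for p in layers.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=1e-2)
x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
for _ in range(20):
    loss = nn.functional.cross_entropy(layers(x), y)
    loss = loss + 1e-2 * sum(m.gate.abs() for m in layers if isinstance(m, GatedLoRALinear))
    opt.zero_grad(); loss.backward(); opt.step()
print([float(m.gate) for m in layers if isinstance(m, GatedLoRALinear)])

After such a search phase, modules whose gates are near zero would be dropped, leaving a smaller learned PET architecture; the paper's actual structured/unstructured pruning procedure may differ from this sketch.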
Pages: 8506-8515
Number of pages: 10
Related Papers (50 in total)
  • [1] Parameter-efficient fine-tuning of large-scale pre-trained language models
    Ding, Ning
    Qin, Yujia
    Yang, Guang
    Wei, Fuchao
    Yang, Zonghan
    Su, Yusheng
    Hu, Shengding
    Chen, Yulin
    Chan, Chi-Min
    Chen, Weize
    Yi, Jing
    Zhao, Weilin
    Wang, Xiaozhi
    Liu, Zhiyuan
    Zheng, Hai-Tao
    Chen, Jianfei
    Liu, Yang
    Tang, Jie
    Li, Juanzi
    Sun, Maosong
    NATURE MACHINE INTELLIGENCE, 2023, 5 (03) : 220 - 235
  • [2] Parameter-Efficient Fine-Tuning of Pre-trained Large Language Models for Financial Text Analysis
    Langa, Kelly
    Wang, Hairong
    Okuboyejo, Olaperi
    ARTIFICIAL INTELLIGENCE RESEARCH, SACAIR 2024, 2025, 2326 : 3 - 20
  • [3] An Empirical Study of Parameter-Efficient Fine-Tuning Methods for Pre-trained Code Models
    Liu, Jiaxing
    Sha, Chaofeng
    Peng, Xin
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 397 - 408
  • [4] Point-PEFT: Parameter-Efficient Fine-Tuning for 3D Pre-trained Models
    Tang, Yiwen
    Zhang, Ray
    Guo, Zoey
    Ma, Xianzheng
    Zhao, Bin
    Wang, Zhigang
    Wang, Dong
    Li, Xuelong
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6, 2024, : 5171 - 5179
  • [5] Parameter-efficient fine-tuning of pre-trained code models for just-in-time defect prediction
    Abu Talib, M.
    Bou Nassif, A.
    Azzeh, M.
    Alesh, Y.
    Afadar, Y.
    Neural Computing and Applications, 36 (27) : 16911 - 16940
  • [6] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning
    Gira, Michael
    Zhang, Ruisu
    Lee, Kangwook
    PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
  • [7] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [8] Bridging pre-trained models to continual learning: A hypernetwork based framework with parameter-efficient fine-tuning techniques
    Ding, Fengqian
    Xu, Chen
    Liu, Han
    Zhou, Bin
    Zhou, Hongchao
    INFORMATION SCIENCES, 2024, 674
  • [9] Parameter-efficient fine-tuning of large language models using semantic knowledge tuning
    Prottasha, Nusrat Jahan
    Mahmud, Asif
    Sobuj, Md. Shohanur Islam
    Bhat, Prakash
    Kowsher, Md
    Yousefi, Niloofar
    Garibay, Ozlem Ozmen
    SCIENTIFIC REPORTS, 2024, 14 (01):