FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models

Cited: 0
Authors
Zhang, Zhuo [1 ,2 ]
Yang, Yuanhang [1 ]
Dai, Yong [4 ]
Wang, Qifan [5 ]
Yu, Yue [2 ]
Que, Lizhen [3 ]
Xu, Zenglin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Monash Univ, Melbourne, Vic, Australia
[4] Tencent, Shenzhen, Peoples R China
[5] Meta AI, Burlingame, CA, USA
Keywords
DOI
Not available
CLC number
Subject classification code
Abstract
With growing concerns about data privacy, there is an increasing need to fine-tune pre-trained language models (PLMs) for downstream tasks located on end-user devices or local clients without transmitting data to a central server. This need motivates the study of federated learning (FL) for PLMs. However, large PLMs impose prohibitive communication overhead and local adaptation costs on the FL system. To this end, we investigate parameter-efficient tuning (PETuning) of PLMs and develop a corresponding federated benchmark for four representative PETuning methods, dubbed FedPETuning. Specifically, FedPETuning provides the first holistic empirical study of representative PLM tuning methods in FL, covering privacy attacks, performance comparisons, and resource-constrained analysis. Extensive experimental results indicate that FedPETuning can effectively defend against privacy attacks and maintain acceptable performance while substantially reducing resource consumption. The open-source code and data are available at https://github.com/SMILELab-FL/FedPETuning.
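The abstract describes federating only the parameter-efficient (PET) weights while the PLM backbone stays frozen, which is what reduces per-round communication. Below is a minimal sketch of that communication pattern, assuming a LoRA-style low-rank adapter and plain FedAvg on synthetic data; it is not the authors' released implementation, and the helper names (init_pet, local_step, fedavg_pet) and the toy least-squares objective are illustrative assumptions.

# Minimal sketch (assumed, not the FedPETuning codebase): clients train only a
# LoRA-style adapter on top of a frozen weight, and the server averages just
# the adapter parameters, so communication scales with the PET weights rather
# than with the full pre-trained model.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, RANK, NUM_CLIENTS, ROUNDS, LR = 64, 4, 5, 3, 0.1

# Frozen "pre-trained" weight shared by all clients; never re-transmitted.
W_frozen = rng.normal(size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)

def init_pet():
    # LoRA-style low-rank adapter: only A and B are trained and communicated.
    return {"A": rng.normal(scale=0.01, size=(HIDDEN, RANK)),
            "B": np.zeros((RANK, HIDDEN))}

def local_step(pet, x, y, lr=LR):
    # One toy gradient step on a least-squares objective with the adapted
    # weight W_frozen + A @ B; only the adapter matrices receive updates.
    W = W_frozen + pet["A"] @ pet["B"]
    grad_W = x.T @ (x @ W - y) / len(x)   # gradient w.r.t. effective weight
    grad_A = grad_W @ pet["B"].T          # chain rule through W = W_frozen + A @ B
    grad_B = pet["A"].T @ grad_W
    pet["A"] -= lr * grad_A
    pet["B"] -= lr * grad_B
    return pet

def fedavg_pet(client_pets):
    # Server-side FedAvg restricted to the PET parameters.
    return {k: np.mean([p[k] for p in client_pets], axis=0) for k in client_pets[0]}

# Synthetic private data standing in for each client's downstream task.
client_data = []
for _ in range(NUM_CLIENTS):
    x = rng.normal(size=(32, HIDDEN))
    w_local = W_frozen + 0.05 * rng.normal(size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
    client_data.append((x, x @ w_local))

global_pet = init_pet()
for rnd in range(ROUNDS):
    updates = []
    for x, y in client_data:
        local = {k: v.copy() for k, v in global_pet.items()}  # download PET params
        updates.append(local_step(local, x, y))               # upload PET params only
    global_pet = fedavg_pet(updates)
    pet_size = sum(v.size for v in global_pet.values())
    print(f"round {rnd}: communicated {pet_size} PET values vs {W_frozen.size} full-model values")

In this toy setup each round exchanges 512 adapter values instead of 4096 full-weight values per client, which mirrors the communication savings the paper attributes to federating PETuning methods.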
Pages: 9963 - 9977
Number of pages: 15
Related papers
50 records in total
  • [31] DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
    Chen, Xuxi
    Chen, Tianlong
    Chen, Weizhu
    Awadallah, Ahmed Hassan
    Wang, Zhangyang
    Cheng, Yu
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 8208 - 8222
  • [32] Span Fine-tuning for Pre-trained Language Models
    Bao, Rongzhou
    Zhang, Zhuosheng
    Zhao, Hai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 1970 - 1979
  • [33] AutoTinyBERT: Automatic Hyper-parameter Optimization for Efficient Pre-trained Language Models
    Yin, Yichun
    Chen, Cheng
    Shang, Lifeng
    Jiang, Xin
    Chen, Xiao
    Liu, Qun
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2021), VOL 1, 2021, : 5146 - 5157
  • [34] Efficient Federated Learning with Pre-Trained Large Language Model Using Several Adapter Mechanisms
    Kim, Gyunyeop
    Yoo, Joon
    Kang, Sangwoo
    MATHEMATICS, 2023, 11 (21)
  • [35] Efficient Data Learning for Open Information Extraction with Pre-trained Language Models
    Fan, Zhiyuan
    He, Shizhu
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 13056 - 13063
  • [36] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models
    Tang, Longxiang
    Tian, Zhuotao
    Li, Kai
    He, Chunming
    Zhou, Hantao
    Zhao, Hengshuang
    Li, Xiu
    Jia, Jiaya
    COMPUTER VISION - ECCV 2024, PT XXXVI, 2025, 15094 : 346 - 365
  • [37] A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model
    Radhakrishnan, Srijith
    Yang, Chao-Han Huck
    Khan, Sumeer Ahmad
    Kiani, Narsis A.
    Gomez-Cabrero, David
    Tegner, Jesper N.
    INTERSPEECH 2023, 2023, : 1958 - 1962
  • [38] Efficient Fine-Tuning for Low-Resource Tibetan Pre-trained Language Models
    Zhou, Mingjun
    Daiqing, Zhuoma
    Qun, Nuo
    Nyima, Tashi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VII, 2024, 15022 : 410 - 422
  • [39] Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs
    Ma, Tengfei
    Hoang, Trong Nghia
    Chen, Jie
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 1336 - 1346
  • [40] Democratizing protein language models with parameter-efficient fine-tuning
    Sledzieski, Samuel
    Kshirsagar, Meghana
    Baek, Minkyung
    Dodhia, Rahul
    Ferres, Juan Lavista
    Berger, Bonnie
    PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2024, 121 (26)