FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models

Cited: 0
Authors
Zhang, Zhuo [1 ,2 ]
Yang, Yuanhang [1 ]
Dai, Yong [4 ]
Wang, Qifan [5 ]
Yu, Yue [2 ]
Que, Lizhen [3 ]
Xu, Zenglin [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Shenzhen, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
[3] Monash Univ, Melbourne, Vic, Australia
[4] Tencent, Shenzhen, Peoples R China
[5] Meta AI, Burlingame, CA USA
DOI: Not available
Abstract
With growing concerns about data privacy, there is an increasing need to fine-tune pre-trained language models (PLMs) for downstream tasks on end-user devices or local clients without transmitting data to a central server, which calls for research on federated learning (FL) for PLMs. However, large PLMs impose prohibitive communication overhead and local model adaptation costs on the FL system. To this end, we investigate parameter-efficient tuning (PETuning) of PLMs and develop a corresponding federated benchmark for four representative PETuning methods, dubbed FedPETuning. Specifically, FedPETuning provides the first holistic empirical study of representative PLM tuning methods in FL, covering privacy attacks, performance comparisons, and resource-constrained analysis. Extensive experimental results indicate that FedPETuning effectively defends against privacy attacks and maintains acceptable performance while substantially reducing resource consumption. The open-source code and data are available at https://github.com/SMILELab-FL/FedPETuning.
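The abstract's central point is that exchanging only the parameter-efficient modules, rather than full PLM weights, is what cuts the FL system's communication and local adaptation costs. Below is a minimal sketch of that idea, not the authors' implementation: simulated clients tune a toy LoRA-style adapter on a frozen linear backbone, and the server FedAvg-aggregates only the adapter tensors. All class and function names here (LoRALinear, trainable_state, local_update, fedavg) are illustrative assumptions; the paper's actual benchmark evaluates four PETuning methods on real PLMs.

```python
"""Sketch of federated parameter-efficient tuning: only the small trainable
adapter tensors are communicated and averaged, never the frozen backbone."""
import copy
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen dense layer plus a low-rank trainable update (LoRA-style)."""

    def __init__(self, d_in: int, d_out: int, rank: int = 4):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)   # backbone stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(d_out, rank))

    def forward(self, x):
        return self.base(x) + x @ self.lora_a.T @ self.lora_b.T


def trainable_state(model: nn.Module):
    """Collect only the parameter-efficient (trainable) tensors for upload."""
    return {k: v.detach().clone()
            for k, v in model.named_parameters() if v.requires_grad}


def local_update(model, data, target, lr=1e-2, steps=5):
    """One client's local adaptation of its PETuning parameters."""
    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return trainable_state(model)


def fedavg(states):
    """Average the uploaded PETuning parameters across clients."""
    return {key: torch.stack([s[key] for s in states]).mean(dim=0)
            for key in states[0]}


if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = LoRALinear(d_in=8, d_out=1)

    for rnd in range(3):                       # a few federated rounds
        client_states = []
        for _ in range(4):                     # four simulated clients
            client = copy.deepcopy(global_model)
            x = torch.randn(32, 8)             # synthetic "private" data
            y = x.sum(dim=1, keepdim=True)
            client_states.append(local_update(client, x, y))
        # The server merges only the low-rank adapters; the frozen backbone
        # is never transmitted, which is the source of the communication savings.
        global_model.load_state_dict(fedavg(client_states), strict=False)
        print(f"round {rnd}: aggregated {len(client_states[0])} adapter tensors")
```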
Pages: 9963-9977
Number of pages: 15
Related Papers
50 items in total (items [41]-[50] shown)
  • [41] Fine-Tuning Pre-Trained Language Models with Gaze Supervision
    Deng, Shuwen
    Prasse, Paul
    Reich, David R.
    Scheffer, Tobias
    Jager, Lena A.
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 2: SHORT PAPERS, 2024, : 217 - 224
  • [42] MediSwift: Efficient Sparse Pre-trained Biomedical Language Models
    Thangarasa, Vithursan
    Salem, Mahmoud
    Saxena, Shreyas
    Leong, Kevin
    Hestness, Joel
    Lie, Sean
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 214 - 230
  • [43] Structured Pruning for Efficient Generative Pre-trained Language Models
    Tao, Chaofan
    Hou, Lu
    Bai, Haoli
    Wei, Jiansheng
    Jiang, Xin
    Liu, Qun
    Lu, Ping
    Wong, Ngai
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 10880 - 10895
  • [44] Shadclips: When Parameter-Efficient Fine-Tuning with Multimodal Meets Shadow Removal
    Zhang, Xiaofeng
    Gu, Chaochen
    Xu, Zishan
    Tang, Hao
    Cheng, Hao
    Wu, Kaijie
    Zhu, Shanying
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (16)
  • [45] The Impact of Training Methods on the Development of Pre-Trained Language Models
    Uribe, Diego
    Cuan, Enrique
    Urquizo, Elisa
COMPUTACION Y SISTEMAS, 2024, 28 (01): 109 - 124
  • [46] Stealing Knowledge from Pre-trained Language Models for Federated Classifier Debiasing
    Zhu, Meilu
    Yang, Qiushi
    Gao, Zhifan
    Liu, Jun
    Yuan, Yixuan
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2024, PT X, 2024, 15010 : 685 - 695
  • [47] Bi-tuning: Efficient Transfer from Pre-trained Models
    Zhong, Jincheng
    Ma, Haoyu
    Wang, Ximei
    Kou, Zhi
    Long, Mingsheng
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES: RESEARCH TRACK, ECML PKDD 2023, PT V, 2023, 14173 : 357 - 373
  • [48] Meta Distant Transfer Learning for Pre-trained Language Models
    Wang, Chengyu
    Pan, Haojie
    Qiu, Minghui
    Yang, Fei
    Huang, Jun
    Zhang, Yin
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 9742 - 9752
  • [49] Empowering Legal Citation Recommendation via Efficient Instruction-Tuning of Pre-trained Language Models
    Wang, Jie
    Bansal, Kanha
    Arapakis, Ioannis
    Ge, Xuri
    Jose, Joemon M.
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT I, 2024, 14608 : 310 - 324
  • [50] Personalised soft prompt tuning in pre-trained language models: Bridging multitask transfer learning and crowdsourcing learning
    Tian, Zeshu
    Zhang, Hongli
    Wang, Yan
    KNOWLEDGE-BASED SYSTEMS, 2024, 305