Learning Task-Specific Initialization for Effective Federated Continual Fine-Tuning of Foundation Model Adapters

Citations: 0
Authors
Peng, Danni [1 ]
Wang, Yuan [1 ]
Fu, Huazhu [1 ]
Wei, Qingsong [1 ]
Liu, Yong [1 ]
Goh, Rick Siow Mong [1 ]
Affiliation
[1] A*STAR, Institute of High Performance Computing (IHPC), Singapore, Singapore
Keywords
Federated Continual Learning; Adapter Fine-Tuning; Knowledge Transfer; Task-Specific Initialization;
DOI
10.1109/CAI59869.2024.00153
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As large models demonstrate their power across a wide range of applications, the federated learning (FL) community has also begun to seek solutions for leveraging these large models in a communication- and computation-efficient manner. In light of this, fine-tuning of lightweight adapters has emerged as a promising approach to adopting large models in FL. Another real-world challenge concerns the non-static data streams encountered by local clients, which require continuous adapter fine-tuning to accommodate new tasks. In this work, we propose a method for effective continual adapter fine-tuning in FL (FedCAF), aimed at enhancing a client's local learning on new tasks. Specifically, FedCAF employs both cross-task and cross-client knowledge transfer to generate an informed, task-specific initialization. By learning a set of attentive weights to combine past task models from all clients, FedCAF produces task-specific initializations that enable better and faster task learning. On the large-scale cross-domain dataset DomainNet, we show that FedCAF significantly outperforms several competitive personalized and continual learning baselines under both class-incremental and domain-incremental settings.
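The core mechanism the abstract describes, combining past task adapters from all clients through learned attentive weights to form an initialization, can be illustrated with a minimal sketch. All names (`task_specific_init`, `attention_logits`) and the flattened-vector representation of adapters are assumptions for illustration, not the paper's actual implementation; in practice the attention weights would be learned jointly with the new task.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array of logits."""
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def task_specific_init(past_adapters, attention_logits):
    """Combine past task adapters (gathered from all clients) into a
    single task-specific initialization via attentive weighting.

    past_adapters:    list of K flattened adapter parameter vectors,
                      one per (client, past task) pair.
    attention_logits: K learnable scores; softmax turns them into
                      a convex combination over past adapters.
    """
    weights = softmax(np.asarray(attention_logits, dtype=float))  # (K,)
    stacked = np.stack(past_adapters)                             # (K, d)
    return weights @ stacked                                      # (d,)

# Toy usage: three past adapters of dimension 4, uniform attention.
adapters = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
init = task_specific_init(adapters, [0.0, 0.0, 0.0])
# → array of 2.0s: with equal weights, the mean of the three adapters
```

The new task's adapter would then be fine-tuned starting from `init` rather than from a random or globally averaged initialization, which is what the abstract credits for faster and better task learning.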
Pages: 811-816
Page count: 6
Related Papers
50 records in total
  • [1] FedPFT: Federated Proxy Fine-Tuning of Foundation Models
    Peng, Zhaopeng
    Fan, Xiaoliang
    Chen, Yufan
    Wang, Zheng
    Pan, Shirui
    Wen, Chenglu
    Zhang, Ruisheng
    Wang, Cheng
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 4806 - 4814
  • [2] Learning to Generate Task-Specific Adapters from Task Description
    Ye, Qinyuan
    Ren, Xiang
    ACL-IJCNLP 2021: THE 59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 2, 2021, : 646 - 653
  • [3] Task-Specific Fine-Tuning for Interactive Deep Learning Segmentation for Lung Fibrosis On CT Post Radiotherapy
    Trimpl, M.
    Salome, P.
    Walz, D.
    Hoerner-Rieber, J.
    Regnery, S.
    Stride, E.
    Vallis, K.
    Debus, J.
    Abdollahi, A.
    Gooding, M.
    Knoll, M.
    MEDICAL PHYSICS, 2022, 49 (06) : E135 - E136
  • [4] Tangent Model Composition for Ensembling and Continual Fine-tuning
    Liu, Tian Yu
    Soatto, Stefano
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18630 - 18640
  • [5] Kaizen: Practical self-supervised continual learning with continual fine-tuning
    Tang, Chi Ian
    Qendro, Lorena
    Spathis, Dimitris
    Kawsar, Fahim
    Mascolo, Cecilia
    Mathur, Akhil
    2024 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION, WACV 2024, 2024, : 2829 - 2838
  • [6] Task-Specific Grasp Planning for Robotic Assembly by Fine-Tuning GQCNNs on Automatically Generated Synthetic Data
    Karoly, Artur Istvan
    Galambos, Peter
    APPLIED SCIENCES-BASEL, 2023, 13 (01):
  • [7] Enhancing Task Performance in Continual Instruction Fine-tuning Through Format Uniformity
    Tan, Xiaoyu
    Cheng, Leijun
    Qiu, Xihe
    Shi, Shaojie
    Cheng, Yuan
    Chu, Wei
    Xu, Yinghui
    Qi, Yuan
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 2384 - 2389
  • [8] Fine-Tuning Network in Federated Learning for Personalized Skin Diagnosis
    Lee, Kyungsu
    Lee, Haeyun
    Cavalcanti, Thiago Coutinho
    Kim, Sewoong
    El Fakhri, Georges
    Lee, Dong Hun
    Woo, Jonghye
    Hwang, Jae Youn
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023, PT III, 2023, 14222 : 378 - 388
  • [9] FedFTHA: A Fine-Tuning and Head Aggregation Method in Federated Learning
    Wang, Yansong
    Xu, Hui
    Ali, Waqar
    Li, Miaobo
    Zhou, Xiangmin
    Shao, Jie
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (14) : 12749 - 12762
  • [10] Scaling Federated Learning for Fine-Tuning of Large Language Models
    Hilmkil, Agrin
    Callh, Sebastian
    Barbieri, Matteo
    Sutfeld, Leon Rene
    Zec, Edvin Listo
    Mogren, Olof
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2021), 2021, 12801 : 15 - 23