MAS: Towards Resource-Efficient Federated Multiple-Task Learning

Cited by: 1
Authors
Zhuang, Weiming [1 ,4 ]
Wen, Yonggang [2 ]
Lyu, Lingjuan [1 ]
Zhang, Shuai [3 ]
Affiliations
[1] Sony AI, Tokyo, Japan
[2] Nanyang Technol Univ, Singapore, Singapore
[3] SenseTime Res, Hong Kong, Peoples R China
[4] Nanyang Technol Univ, S Lab, Singapore, Singapore
Keywords
DOI
10.1109/ICCV51070.2023.02140
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Federated learning (FL) is an emerging distributed machine learning method that empowers in-situ model training on decentralized edge devices. However, multiple simultaneous FL tasks could overload resource-constrained devices. In this work, we propose the first FL system to effectively coordinate and train multiple simultaneous FL tasks. We first formalize the problem of training simultaneous FL tasks. Then, we present our new approach, MAS (Merge and Split), to optimize the performance of training multiple simultaneous FL tasks. MAS starts by merging FL tasks into an all-in-one FL task with a multi-task architecture. After training for a few rounds, MAS splits the all-in-one FL task into two or more FL tasks by using the affinities among tasks measured during the all-in-one training. It then continues training each split of FL tasks based on model parameters from the all-in-one training. Extensive experiments demonstrate that MAS outperforms other methods while reducing training time by 2x and reducing energy consumption by 40%. We hope this work will inspire the community to further study and optimize training simultaneous FL tasks.
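The split step described above groups FL tasks by their measured pairwise affinities, so that tasks that help each other keep sharing a model while weakly related tasks are trained apart. The sketch below is an illustrative assumption, not the paper's exact algorithm: it seeds two groups with the least-affine task pair, then greedily assigns each remaining task to the group it is most affine to on average. The task names and affinity scores are hypothetical placeholders.

```python
# Hypothetical sketch of affinity-based task splitting (not the exact MAS
# procedure). affinity[frozenset((a, b))] is a symmetric score measured
# during all-in-one training; higher means "train a and b together".

def split_by_affinity(tasks, affinity):
    """Greedily split tasks into two groups of high within-group affinity."""
    # Seed the two groups with the least-affine task pair.
    seed_a, seed_b = min(
        ((a, b) for i, a in enumerate(tasks) for b in tasks[i + 1:]),
        key=lambda pair: affinity[frozenset(pair)],
    )
    groups = [[seed_a], [seed_b]]
    for t in tasks:
        if t in (seed_a, seed_b):
            continue
        # Assign each remaining task to its most-affine group (mean score).
        scores = [
            sum(affinity[frozenset((t, m))] for m in g) / len(g)
            for g in groups
        ]
        groups[scores.index(max(scores))].append(t)
    return groups

# Toy example with made-up affinities between four vision tasks.
tasks = ["segmentation", "depth", "normals", "keypoints"]
affinity = {
    frozenset(pair): score
    for pair, score in [
        (("segmentation", "depth"), 0.9),
        (("segmentation", "normals"), 0.2),
        (("segmentation", "keypoints"), 0.1),
        (("depth", "normals"), 0.3),
        (("depth", "keypoints"), 0.2),
        (("normals", "keypoints"), 0.8),
    ]
}
print(split_by_affinity(tasks, affinity))
# → [['segmentation', 'depth'], ['keypoints', 'normals']]
```

After such a split, each group would continue training from the all-in-one model's parameters, which is what lets MAS avoid retraining each split from scratch.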
Pages: 23357 - 23367
Page count: 11
Related Papers (50 total)
  • [1] REFL: Resource-Efficient Federated Learning
    Abdelmoniem, Ahmed M.
    Sahu, Atal Narayan
    Canini, Marco
    Fahmy, Suhaib A.
    [J]. PROCEEDINGS OF THE EIGHTEENTH EUROPEAN CONFERENCE ON COMPUTER SYSTEMS, EUROSYS 2023, 2023, : 215 - 232
  • [2] Towards a resource-efficient semi-asynchronous federated learning for heterogeneous devices
    Sasindran, Zitha
    Yelchuri, Harsha
    Prabhakar, T. V.
    [J]. 2024 NATIONAL CONFERENCE ON COMMUNICATIONS, NCC, 2024,
  • [3] Resource-Efficient Federated Learning for Network Intrusion Detection
    Doriguzzi-Corin, Roberto
    Cretti, Silvio
    Siracusa, Domenico
    [J]. 2024 IEEE 10TH INTERNATIONAL CONFERENCE ON NETWORK SOFTWARIZATION, NETSOFT 2024, 2024, : 357 - 362
  • [4] RESOURCE-EFFICIENT FEDERATED LEARNING ROBUST TO COMMUNICATION ERRORS
    Lari, Ehsan
    Gogineni, Vinay Chakravarthi
    Arablouei, Reza
    Werner, Stefan
    [J]. 2023 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP, SSP, 2023, : 265 - 269
  • [5] Toward Resource-Efficient Federated Learning in Mobile Edge Computing
    Yu, Rong
    Li, Peichun
    [J]. IEEE NETWORK, 2021, 35 (01): : 148 - 155
  • [6] FedTrip: A Resource-Efficient Federated Learning Method with Triplet Regularization
    Li, Xujing
    Liu, Min
    Sun, Sheng
    Wang, Yuwei
    Jiang, Hui
    Jiang, Xuefeng
    [J]. 2023 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM, IPDPS, 2023, : 809 - 819
  • [7] Resource-Efficient Federated Learning with Hierarchical Aggregation in Edge Computing
    Wang, Zhiyuan
    Xu, Hongli
    Liu, Jianchun
    Huang, He
    Qiao, Chunming
    Zhao, Yangming
    [J]. IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [8] FLrce: Resource-Efficient Federated Learning With Early-Stopping Strategy
    Niu, Ziru
    Dong, Hai
    Qin, A.K.
    Gu, Tao
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 14514 - 14529
  • [9] Towards Resource-Efficient Edge AI: From Federated Learning to Semi-Supervised Model Personalization
    Zhang, Zhaofeng
    Yue, Sheng
    Zhang, Junshan
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 6104 - 6115
  • [10] Resource-Efficient Hierarchical Collaborative Federated Learning in Heterogeneous Internet of Things
    Wang, Ruyan
    Chen, Wei
    Zhang, Puning
    Wu, Dapeng
    Yang, Zhigang
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2023, 45 (08) : 2847 - 2855