Powering Multi-Task Federated Learning with Competitive GPU Resource Sharing

Cited by: 0
Authors
Yu, Yongbo [1 ]
Yu, Fuxun [1 ]
Xu, Zirui [1 ]
Wang, Di [2 ]
Zhang, Mingjia [2 ]
Li, Ang [3 ]
Bray, Shawn [4 ]
Liu, Chenchen [4 ]
Chen, Xiang [1 ]
Affiliations
[1] George Mason Univ, Fairfax, VA 22030 USA
[2] Microsoft, Redmond, WA USA
[3] Duke Univ, Durham, NC USA
[4] Univ Maryland, Baltimore, MD USA
Keywords
Federated Learning; Multi-Task Learning; GPU Resource Allocation;
DOI
10.1145/3487553.3524859
CLC (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Federated learning (FL) increasingly involves compound learning tasks as cognitive applications grow more complex. For example, a self-driving system hosts multiple tasks simultaneously (e.g., detection, classification) and expects FL to sustain life-long intelligence. However, our analysis demonstrates that deploying compound FL models for multiple training tasks on a GPU raises two issues: (1) because different tasks' skewed data distributions and corresponding models cause highly imbalanced learning workloads, current GPU scheduling methods lack effective resource allocation; (2) consequently, existing FL schemes, which focus only on heterogeneous data distribution and neglect runtime computing, cannot practically achieve optimally synchronized federation. To address these issues, we propose a full-stack FL optimization scheme that handles both intra-device GPU scheduling and inter-device FL coordination for multi-task training. Specifically, our work offers two key insights: (1) competitive resource sharing benefits parallel model execution, and the proposed concept of "virtual resource" can effectively characterize and guide practical per-task resource utilization and allocation; (2) FL can be further improved by taking architecture-level coordination into account. Our experiments demonstrate that FL throughput can be significantly increased.
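The abstract's core idea — using per-task "virtual resource" shares to rebalance imbalanced co-located training workloads — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the unit-less workload numbers are hypothetical, and it assumes the simple model that a task's round time scales inversely with the GPU share it receives.

```python
# Hypothetical sketch: proportional "virtual resource" allocation for
# co-located training tasks with skewed (imbalanced) workloads.

def virtual_resource_shares(workloads):
    """Return per-task GPU shares proportional to workload (sums to 1.0)."""
    total = sum(workloads)
    return [w / total for w in workloads]

def round_time(workloads, shares):
    """A synchronized round finishes when the slowest task does,
    assuming time = workload / allocated share."""
    return max(w / s for w, s in zip(workloads, shares))

workloads = [4.0, 1.0, 1.0]               # skewed per-task learning workloads
equal = [1 / 3] * 3                       # naive equal GPU partition
prop = virtual_resource_shares(workloads) # workload-proportional partition

print(round_time(workloads, equal))       # 12.0: the heavy task dominates
print(round_time(workloads, prop))        # 6.0: all tasks finish together
```

Under this toy model, proportional shares halve the synchronized round time because every task completes at the same moment instead of waiting on the most heavily loaded one.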
Pages: 567-571 (5 pages)