Multi-Task Federated Learning for Personalised Deep Neural Networks in Edge Computing

Cited by: 111
Authors
Mills, Jed [1 ]
Hu, Jia [1 ]
Min, Geyong [1 ]
Affiliations
[1] Univ Exeter, Dept Comp Sci, Exeter EX4 4QF, Devon, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC); EU Horizon 2020;
Keywords
Federated learning; multi-task learning; deep learning; edge computing; adaptive optimization;
DOI
10.1109/TPDS.2021.3098467
CLC Classification
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Federated Learning (FL) is an emerging approach for collaboratively training Deep Neural Networks (DNNs) on mobile devices, without private user data leaving the devices. Previous works have shown that non-Independent and Identically Distributed (non-IID) user data harms the convergence speed of the FL algorithms. Furthermore, most existing work on FL measures global-model accuracy, but in many cases, such as user content-recommendation, improving individual User model Accuracy (UA) is the real objective. To address these issues, we propose a Multi-Task FL (MTFL) algorithm that introduces non-federated Batch-Normalization (BN) layers into the federated DNN. MTFL benefits UA and convergence speed by allowing users to train models personalised to their own data. MTFL is compatible with popular iterative FL optimisation algorithms such as Federated Averaging (FedAvg), and we show empirically that a distributed form of Adam optimisation (FedAvg-Adam) benefits convergence speed even further when used as the optimisation strategy within MTFL. Experiments using MNIST and CIFAR10 demonstrate that MTFL is able to significantly reduce the number of rounds required to reach a target UA, by up to 5x when using existing FL optimisation strategies, and with a further 3x improvement when using FedAvg-Adam. We compare MTFL to competing personalised FL algorithms, showing that it is able to achieve the best UA for MNIST and CIFAR10 in all considered scenarios. Finally, we evaluate MTFL with FedAvg-Adam on an edge-computing testbed, showing that its convergence and UA benefits outweigh its overhead.
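The abstract describes MTFL's core mechanism: batch-normalisation parameters stay private on each device, while the remaining weights are aggregated across clients with FedAvg. A minimal sketch of one communication round, using illustrative names (`partition_params`, `mtfl_round`, a stubbed `local_train`) and a `bn_` naming convention that are not taken from the paper:

```python
def partition_params(params):
    # In MTFL, batch-norm parameters are private to each device; here we
    # tag them with a "bn_" name prefix (an illustrative convention).
    fed = {k: v for k, v in params.items() if not k.startswith("bn_")}
    priv = {k: v for k, v in params.items() if k.startswith("bn_")}
    return fed, priv

def fedavg(updates, sizes):
    # FedAvg: average federated parameters, weighted by local dataset size.
    total = sum(sizes)
    return {k: sum(n * u[k] for n, u in zip(sizes, updates)) / total
            for k in updates[0]}

def local_train(params, data):
    # Placeholder for local SGD/Adam epochs; real training updates both
    # federated and private (BN) parameters on-device.
    return {k: v - 0.1 for k, v in params.items()}

def mtfl_round(global_fed, clients):
    # One MTFL communication round: each client combines the global
    # federated weights with its own private BN parameters, trains
    # locally, and uploads only the federated part.
    updates, sizes = [], []
    for c in clients:
        local = {**global_fed, **c["private"]}    # personalised model
        trained = local_train(local, c["data"])
        fed, priv = partition_params(trained)
        c["private"] = priv                       # BN params never leave the device
        updates.append(fed)
        sizes.append(len(c["data"]))
    return fedavg(updates, sizes)                 # only federated weights averaged
```

Replacing the simple weighted averaging in `fedavg` with adaptive server-side updates would give a FedAvg-Adam-style variant as mentioned in the abstract; the sketch above only illustrates the federated/private parameter split.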
Pages: 630 - 641
Page count: 12
Related Papers
50 items in total
  • [1] Multi-Task Federated Edge Learning (MTFeeL) With SignSGD
    Mahara, Sawan Singh
    Shruti, M.
    Bharath, B. N.
    2022 NATIONAL CONFERENCE ON COMMUNICATIONS (NCC), 2022: 379 - 384
  • [2] Evolving Deep Parallel Neural Networks for Multi-Task Learning
    Wu, Jie
    Sun, Yanan
    ALGORITHMS AND ARCHITECTURES FOR PARALLEL PROCESSING, ICA3PP 2021, PT II, 2022, 13156: 517 - 531
  • [3] FedICT: Federated Multi-Task Distillation for Multi-Access Edge Computing
    Wu, Zhiyuan
    Sun, Sheng
    Wang, Yuwei
    Liu, Min
    Pan, Quyang
    Jiang, Xuefeng
    Gao, Bo
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2024, 35 (06): 952 - 966
  • [4] Federated Multi-Task Learning
    Smith, Virginia
    Chiang, Chao-Kai
    Sanjabi, Maziar
    Talwalkar, Ameet
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [5] Multi-Agent Deep Reinforcement Learning Based Incentive Mechanism for Multi-Task Federated Edge Learning
    Zhao, Nan
    Pei, Yiyang
    Liang, Ying-Chang
    Niyato, Dusit
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (10): 13530 - 13535
  • [6] Multi-Adaptive Optimization for multi-task learning with deep neural networks
    Hervella, Alvaro S.
    Rouco, Jose
    Novo, Jorge
    Ortega, Marcos
    NEURAL NETWORKS, 2024, 170: 254 - 265
  • [7] Deep Convolutional Neural Networks for Multi-Instance Multi-Task Learning
    Zeng, Tao
    Ji, Shuiwang
    2015 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2015: 579 - 588
  • [8] Cell tracking using deep neural networks with multi-task learning
    He, Tao
    Mao, Hua
    Guo, Jixiang
    Yi, Zhang
    IMAGE AND VISION COMPUTING, 2017, 60: 142 - 153
  • [9] MULTI-TASK LEARNING IN DEEP NEURAL NETWORKS FOR IMPROVED PHONEME RECOGNITION
    Seltzer, Michael L.
    Droppo, Jasha
    2013 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2013: 6965 - 6969
  • [10] Rapid Adaptation for Deep Neural Networks through Multi-Task Learning
    Huang, Zhen
    Li, Jinyu
    Siniscalchi, Sabato Marco
    Chen, I-Fan
    Wu, Ji
    Lee, Chin-Hui
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015: 3625 - 3629