SHARE: Shaping Data Distribution at Edge for Communication-Efficient Hierarchical Federated Learning

Cited by: 35
Authors
Deng, Yongheng [1 ]
Lyu, Feng [2 ]
Ren, Ju [2 ]
Zhang, Yongmin [2 ]
Zhou, Yuezhi [1 ]
Zhang, Yaoxue [1 ]
Yang, Yuanyuan [3 ]
Affiliations
[1] Tsinghua Univ, BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha, Peoples R China
[3] SUNY Stony Brook, Dept Elect & Comp Engn, Stony Brook, NY USA
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
DOI
10.1109/ICDCS51616.2021.00012
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) enables distributed model training over mobile nodes without sharing privacy-sensitive raw data. However, a significant challenge for efficient FL is the prohibitive communication overhead of committing model updates, since frequent cloud model aggregations are usually required to reach a target accuracy, especially when the data distributions at mobile nodes are imbalanced. Pilot experiments verify that frequent cloud model aggregations can be avoided without performance degradation if model aggregations can be conducted at the edge. To this end, we shed light on the hierarchical federated learning (HFL) framework, where a subset of distributed nodes are selected as edge aggregators to conduct edge aggregations. In particular, under the HFL framework, we formulate a communication cost minimization (CCM) problem that minimizes the communication cost incurred by edge/cloud aggregations by jointly deciding edge aggregator selection and distributed node association. Inspired by the insight that the potential of HFL lies in the data distribution at the edge aggregators, we propose SHARE, i.e., SHaping dAta distRibution at Edge, to transform and solve the CCM problem. SHARE divides the original problem into two sub-problems, which minimize the per-round communication cost and the mean Kullback-Leibler divergence of the edge aggregators' data distributions, respectively, and devises two lightweight algorithms to solve them. Extensive experiments under various settings corroborate the efficacy of SHARE.
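To make the second sub-objective concrete, below is a minimal Python sketch (an illustration, not the authors' implementation) of the quantity that sub-problem targets: the mean Kullback-Leibler divergence KL(P_edge || P_global) between each edge aggregator's aggregate label distribution and the global label distribution. All identifiers (mean_edge_kl, node_label_counts, association) are hypothetical names introduced here for illustration.

import numpy as np

def label_distribution(counts, eps=1e-12):
    """Normalize a vector of per-class sample counts into a probability
    distribution; eps-smoothing keeps log() finite for empty classes."""
    counts = np.asarray(counts, dtype=float) + eps
    return counts / counts.sum()

def mean_edge_kl(node_label_counts, association):
    """Mean KL(P_edge || P_global) over all edge aggregators.

    node_label_counts: (num_nodes, num_classes) array of label counts.
    association: association[i] is the edge aggregator id of node i.
    """
    node_label_counts = np.asarray(node_label_counts, dtype=float)
    p_global = label_distribution(node_label_counts.sum(axis=0))
    kls = []
    for edge_id in sorted(set(association)):
        # Pool the data of all nodes associated with this edge aggregator.
        members = [i for i, e in enumerate(association) if e == edge_id]
        p_edge = label_distribution(node_label_counts[members].sum(axis=0))
        # KL(P_edge || P_global) = sum_c p_edge[c] * log(p_edge[c] / p_global[c])
        kls.append(np.sum(p_edge * np.log(p_edge / p_global)))
    return float(np.mean(kls))

# Toy example: four nodes with skewed binary labels, two edge aggregators.
counts = [[50, 0], [0, 50], [50, 0], [0, 50]]
print(mean_edge_kl(counts, association=[0, 0, 1, 1]))  # complementary pairing -> ~0.0
print(mean_edge_kl(counts, association=[0, 1, 0, 1]))  # like-with-like pairing -> ~0.69

In the toy example, associating nodes with complementary label skews under the same aggregator drives the mean divergence to near zero, which captures the intuition behind shaping the data distribution at the edge.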
Pages: 24-34
Number of pages: 11
Related Papers
50 records in total
  • [31] Ternary Compression for Communication-Efficient Federated Learning
    Xu, Jinjin; Du, Wenli; Jin, Yaochu; He, Wangli; Cheng, Ran
    IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(3): 1162-1176
  • [32] Communication-Efficient Personalized Federated Learning on Non-IID Data
    Li, Xiangqian; Ma, Chunmei; Huang, Baogui; Li, Guangshun
    2023 19th International Conference on Mobility, Sensing and Networking (MSN 2023), 2023: 562-569
  • [33] HPFL-CN: Communication-Efficient Hierarchical Personalized Federated Edge Learning via Complex Network Feature Clustering
    Li, Zijian; Chen, Zihan; Wei, Xiaohui; Gao, Shang; Ren, Chenghao; Quek, Tony Q. S.
    2022 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), 2022: 325-333
  • [34] Communication-Efficient Federated Edge Learning for NR-U-Based IIoT Networks
    Chen, Qimei; Xu, Xiaoxia; You, Zehua; Jiang, Hao; Zhang, Jun; Wang, Fei-Yue
    IEEE Internet of Things Journal, 2021, 9(14): 12450-12459
  • [35] Communication-Efficient and Byzantine-Robust Federated Learning for Mobile Edge Computing Networks
    Zhang, Zhuangzhuang; Wu, Libing; He, Debiao; Li, Jianxin; Cao, Shuqin; Wu, Xianfeng
    IEEE Network, 2023, 37(4): 112-119
  • [36] Communication-Efficient Federated Learning for Massive MIMO Systems
    Mu, Yuchen; Garg, Navneet; Ratnarajah, Tharmalingam
    2022 IEEE Wireless Communications and Networking Conference (WCNC), 2022: 578-583
  • [37] Communication-Efficient Federated Learning via Knowledge Distillation
    Wu, Chuhan; Wu, Fangzhao; Lyu, Lingjuan; Huang, Yongfeng; Xie, Xing
    Nature Communications, 2022, 13(1)
  • [38] ALS Algorithm for Robust and Communication-Efficient Federated Learning
    Hurley, Neil; Duriakova, Erika; Geraci, James; O'Reilly-Morgan, Diarmuid; Tragos, Elias; Smyth, Barry; Lawlor, Aonghus
    Proceedings of the 2024 4th Workshop on Machine Learning and Systems (EuroMLSys 2024), 2024: 56-64
  • [39] Federated Learning with Autotuned Communication-Efficient Secure Aggregation
    Bonawitz, Keith; Salehi, Fariborz; Konecny, Jakub; McMahan, Brendan; Gruteser, Marco
    Conference Record of the 2019 Fifty-Third Asilomar Conference on Signals, Systems & Computers, 2019: 1222-1226
  • [40] On the Design of Communication-Efficient Federated Learning for Health Monitoring
    Chu, Dong; Jaafar, Wael; Yanikomeroglu, Halim
    2022 IEEE Global Communications Conference (GLOBECOM 2022), 2022: 1128-1133