SHARE: Shaping Data Distribution at Edge for Communication-Efficient Hierarchical Federated Learning

Cited by: 35
Authors
Deng, Yongheng [1 ]
Lyu, Feng [2 ]
Ren, Ju [2 ]
Zhang, Yongmin [2 ]
Zhou, Yuezhi [1 ]
Zhang, Yaoxue [1 ]
Yang, Yuanyuan [3 ]
Affiliations
[1] Tsinghua Univ, BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
[2] Cent South Univ, Sch Comp Sci & Engn, Changsha, Peoples R China
[3] SUNY Stony Brook, Dept Elect & Comp Engn, Stony Brook, NY USA
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
DOI
10.1109/ICDCS51616.2021.00012
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Federated learning (FL) enables distributed model training over mobile nodes without sharing privacy-sensitive raw data. However, a significant obstacle to efficient FL is the prohibitive communication overhead of committing model updates, since frequent cloud model aggregations are usually required to reach a target accuracy, especially when the data distributions at mobile nodes are imbalanced. Pilot experiments verify that frequent cloud model aggregations can be avoided without performance degradation if model aggregations can instead be conducted at the edge. To this end, we shed light on the hierarchical federated learning (HFL) framework, where a subset of distributed nodes are selected as edge aggregators to conduct edge aggregations. In particular, under the HFL framework, we formulate a communication cost minimization (CCM) problem to minimize the communication cost incurred by edge/cloud aggregations, by jointly deciding edge aggregator selection and distributed node association. Inspired by the insight that the potential of HFL lies in the data distribution at edge aggregators, we propose SHARE, i.e., SHaping dAta distRibution at Edge, to transform and solve the CCM problem. In SHARE, we divide the original problem into two sub-problems that minimize, respectively, the per-round communication cost and the mean Kullback-Leibler divergence of edge aggregator data, and devise two lightweight algorithms to solve them. Extensive experiments under various settings corroborate the efficacy of SHARE.
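The second sub-problem evaluates how closely each edge aggregator's combined data matches the global label distribution via KL divergence. A minimal sketch of that metric follows; the function names, the toy data, and the uniform handling of empty aggregators are illustrative assumptions, not details from the paper:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions (with smoothing)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def mean_edge_kl(node_label_counts, assignment, num_edges):
    """Mean KL divergence between each edge aggregator's aggregated label
    distribution and the global label distribution.

    node_label_counts: (num_nodes, num_classes) per-node label histograms
    assignment[i]: index of the edge aggregator that node i is associated with
    """
    counts = np.asarray(node_label_counts, dtype=float)
    assignment = np.asarray(assignment)
    global_dist = counts.sum(axis=0)
    kls = []
    for e in range(num_edges):
        members = counts[assignment == e]
        if len(members) == 0:
            continue  # skip aggregators with no associated nodes
        kls.append(kl_divergence(members.sum(axis=0), global_dist))
    return float(np.mean(kls))

# Toy example: 4 nodes, 2 classes, 2 edge aggregators.
counts = [[10, 0], [0, 10], [10, 0], [0, 10]]
balanced = mean_edge_kl(counts, [0, 0, 1, 1], 2)  # each edge sees both classes
skewed = mean_edge_kl(counts, [0, 1, 0, 1], 2)    # each edge sees one class
assert balanced < skewed
```

A node-association scheme that drives this mean divergence toward zero makes each edge aggregator's data look i.i.d., which is the intuition behind shaping the data distribution at the edge.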
Pages: 24-34 (11 pages)
Related Papers (50 total)
  • [1] A Communication-Efficient Hierarchical Federated Learning Framework via Shaping Data Distribution at Edge
    Deng, Yongheng
    Lyu, Feng
    Xia, Tengxi
    Zhou, Yuezhi
    Zhang, Yaoxue
    Ren, Ju
    Yang, Yuanyuan
    [J]. IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (03) : 2600 - 2615
  • [2] Communication-efficient and Scalable Decentralized Federated Edge Learning
    Yapp, Austine Zong Han
    Koh, Hong Soo Nicholas
    Lai, Yan Ting
    Kang, Jiawen
    Li, Xuandi
    Ng, Jer Shyuan
    Jiang, Hongchao
    Lim, Wei Yang Bryan
    Xiong, Zehui
    Niyato, Dusit
    [J]. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 5032 - 5035
  • [3] Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data
    Abdellatif, Alaa Awad
    Mhaisen, Naram
    Mohamed, Amr
    Erbad, Aiman
    Guizani, Mohsen
    Dawy, Zaher
    Nasreddine, Wassim
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 128 : 406 - 419
  • [4] Communication-efficient federated learning
    Chen, Mingzhe
    Shlezinger, Nir
    Poor, H. Vincent
    Eldar, Yonina C.
    Cui, Shuguang
    [J]. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 2021, 118 (17)
  • [5] Communication-Efficient Federated Learning for Wireless Edge Intelligence in IoT
    Mills, Jed
    Hu, Jia
    Min, Geyong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (07): : 5986 - 5994
  • [6] Coded Federated Learning for Communication-Efficient Edge Computing: A Survey
    Zhang, Yiqian
    Gao, Tianli
    Li, Congduan
    Tan, Chee Wei
    [J]. IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY, 2024, 5 : 4098 - 4124
  • [7] LGCM: A Communication-Efficient Scheme for Federated Learning in Edge Devices
    Saadat, Nafas Gul
    Thahir, Sameer Mohamed
    Kumar, Santhosh G.
    Jereesh, A. S.
    [J]. 2022 IEEE 19TH INDIA COUNCIL INTERNATIONAL CONFERENCE, INDICON, 2022
  • [8] Communication-Efficient Federated Learning for Resource-Constrained Edge Devices
    Lan, Guangchen
    Liu, Xiao-Yang
    Zhang, Yijing
    Wang, Xiaodong
    [J]. IEEE TRANSACTIONS ON MACHINE LEARNING IN COMMUNICATIONS AND NETWORKING, 2023, 1 : 210 - 224
  • [9] FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
    Xiong, Yuanhao
    Wang, Ruochen
    Cheng, Minhao
    Yu, Felix
    Hsieh, Cho-Jui
    [J]. 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 16323 - 16332
  • [10] Toward Communication-Efficient Federated Learning in the Internet of Things With Edge Computing
    Sun, Haifeng
    Li, Shiqi
    Yu, F. Richard
    Qi, Qi
    Wang, Jingyu
    Liao, Jianxin
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (11): : 11053 - 11067