Communication-Efficient Federated Learning for Resource-Constrained Edge Devices

Cited: 3
Authors
Lan, Guangchen [1]
Liu, Xiao-Yang [1]
Zhang, Yijing [1]
Wang, Xiaodong [1]
Affiliations
[1] Columbia University, Department of Electrical Engineering, New York, NY 10027, United States
Keywords
Computing power; Cost reduction; Iterative methods; Network coding; Neural network models; Tensors
DOI
10.1109/TMLCN.2023.3309773
Abstract
Federated learning (FL) is an emerging paradigm for training a global deep neural network (DNN) model by collaborative clients that store their private data locally, under the coordination of a central server. A major challenge is the high communication overhead during the training stage, especially when the clients are edge devices linked wirelessly to the central server. In this paper, we propose efficient techniques to reduce the communication overhead of FL from three perspectives. First, to reduce the amount of data exchanged between the clients and the central server, we propose employing low-rank tensor models to represent neural networks, which substantially reduces the model parameter size and thereby yields significant reductions in both computational complexity and communication overhead. We then consider two edge scenarios and propose corresponding FL schemes over wireless channels. In the first scenario, the edge devices have barely sufficient computing and communication capabilities, and we propose a lattice-coded over-the-air computation scheme for the clients to transmit their local model parameters to the server; compared with traditional repetition transmission, this scheme significantly reduces distortion. In the second scenario, the edge devices have very limited computing and communication power, and we propose natural gradient-based FL, which involves only forward passes and in which each client transmits only one scalar to the server at each training iteration. Numerical results on the MNIST and CIFAR-10 data sets demonstrate that the proposed communication-efficient FL techniques significantly reduce the communication overhead while maintaining high learning performance. © 2023 CC BY.
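To make the first of the three techniques concrete, the following is a minimal NumPy sketch, not the authors' implementation: the layer shape (512 x 512) and the rank (16) are illustrative assumptions. It shows how replacing a dense weight matrix with a rank-r factorization cuts the parameter count from m*n to r*(m+n), which is what shrinks both the computation and the model payload each client uploads per round.

import numpy as np

# Illustrative dimensions: one 512x512 dense layer, factored at an assumed rank of 16.
m, n, r = 512, 512, 16

W_dense = np.random.randn(m, n)   # dense weight: m*n = 262,144 parameters
U = np.random.randn(m, r)         # factor U: m*r parameters
V = np.random.randn(r, n)         # factor V: r*n parameters

dense_params = W_dense.size              # 262,144
lowrank_params = U.size + V.size         # 16,384
print(f"compression: {dense_params / lowrank_params:.0f}x")  # 16x

# Forward pass through the factored layer: two thin matrix-vector
# products instead of one large one, so compute also drops from
# O(m*n) to O(r*(m+n)).
x = np.random.randn(n)
y = U @ (V @ x)

At rank 16 the factored layer carries 16,384 parameters instead of 262,144, a 16x reduction in what each client must transmit per round; the abstract's low-rank tensor models apply the same principle to higher-order weight tensors.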
Pages: 210-224
Related Papers (50 entries in total)
  • [1] Communication-efficient asynchronous federated learning in resource-constrained edge computing. Liu, Jianchun; Xu, Hongli; Xu, Yang; Ma, Zhenguo; Wang, Zhiyuan; Qian, Chen; Huang, He. COMPUTER NETWORKS, 2021, 199.
  • [2] Efficient federated learning on resource-constrained edge devices based on model pruning. Wu, Tingting; Song, Chunhe; Zeng, Peng. COMPLEX & INTELLIGENT SYSTEMS, 2023, 9(6): 6999-7013.
  • [3] Efficient Privacy-Preserving Federated Learning for Resource-Constrained Edge Devices. Wu, Jindi; Xia, Qi; Li, Qun. 2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021: 191-198.
  • [4] Efficient knowledge management for heterogeneous federated continual learning on resource-constrained edge devices. Yang, Zhao; Zhang, Shengbing; Li, Chuxi; Wang, Miao; Wang, Haoyang; Zhang, Meng. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 156: 16-29.
  • [5] LGCM: A Communication-Efficient Scheme for Federated Learning in Edge Devices. Saadat, Nafas Gul; Thahir, Sameer Mohamed; Kumar, Santhosh G.; Jereesh, A. S. 2022 IEEE 19TH INDIA COUNCIL INTERNATIONAL CONFERENCE, INDICON, 2022.
  • [6] Communication-Efficient Federated Learning with Heterogeneous Devices. Chen, Zhixiong; Yi, Wenqiang; Liu, Yuanwei; Nallanathan, Arumugam. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 3602-3607.
  • [7] FedComp: A Federated Learning Compression Framework for Resource-Constrained Edge Computing Devices. Wu, Donglei; Yang, Weihao; Jin, Haoyu; Zou, Xiangyu; Xia, Wen; Fang, Binxing. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43(1): 230-243.
  • [8] A Survey on Federated Learning for Resource-Constrained IoT Devices. Imteaj, Ahmed; Thakker, Urmish; Wang, Shiqiang; Li, Jian; Amini, M. Hadi. IEEE INTERNET OF THINGS JOURNAL, 2022, 9(1): 1-24.
  • [9] Communication-efficient and Scalable Decentralized Federated Edge Learning. Yapp, Austine Zong Han; Koh, Hong Soo Nicholas; Lai, Yan Ting; Kang, Jiawen; Li, Xuandi; Ng, Jer Shyuan; Jiang, Hongchao; Lim, Wei Yang Bryan; Xiong, Zehui; Niyato, Dusit. PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021: 5032-5035.