Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design

Cited by: 5
Authors
Zhu, Zheqi [1 ,2 ]
Shi, Yuchen [1 ,2 ]
Xin, Gangtao [1 ,2 ]
Peng, Chenghui [3 ]
Fan, Pingyi [1 ,2 ]
Letaief, Khaled B. [4 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Beijing 100084, Peoples R China
[3] Huawei Technol, Wireless Technol Lab, Shanghai 201206, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
Keywords
federated learning; model pruning; parameter quantization; code design; layer-wise aggregation; communication-computation efficiency; CHALLENGES;
DOI: 10.3390/e25081205
CLC Number
O4 [Physics]
Discipline Code
0702
Abstract
As a promising distributed learning paradigm, federated learning (FL) faces communication and computation bottlenecks in practical deployments. In this work, we focus on the pruning, quantization, and coding of FL. By adopting layer-wise operations, we propose an explicit and universal scheme: FedLP-Q (federated learning with layer-wise pruning-quantization). We develop pruning strategies for both homogeneous and heterogeneous scenarios, a stochastic quantization rule, and the corresponding coding scheme. Both theoretical and experimental evaluations suggest that FedLP-Q improves communication and computation efficiency with controllable performance degradation. The key novelty of FedLP-Q is that it provides a joint pruning-quantization FL framework with layer-wise processing and can easily be applied in practical FL systems.
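
For intuition, the following minimal Python sketch illustrates two of the operations the abstract names: an unbiased stochastic-rounding quantizer applied per layer, and layer-wise pruning realized by dropping layers from the uploaded update. This is a generic sketch under stated assumptions, not the paper's actual algorithm; the function names (stochastic_quantize, compress_update_layerwise) and the per-layer bit-allocation interface are hypothetical.

```python
import numpy as np

def stochastic_quantize(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Unbiased stochastic rounding of a weight tensor onto 2**bits uniform levels.

    Generic illustration of a per-layer stochastic quantization rule;
    the exact rule used by FedLP-Q may differ.
    """
    levels = 2 ** bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    # Position of each weight on the quantization grid.
    pos = (w - w_min) / scale
    lower = np.floor(pos)
    # Round up with probability equal to the fractional part, so that
    # the quantized value equals the original value in expectation.
    frac = pos - lower
    q = lower + (np.random.rand(*w.shape) < frac)
    return w_min + q * scale

def compress_update_layerwise(layers: dict, bits_per_layer: list) -> dict:
    """Quantize a model update layer by layer; a layer assigned 0 bits is
    treated as pruned and excluded from the upload entirely."""
    return {
        name: stochastic_quantize(w, b)
        for (name, w), b in zip(layers.items(), bits_per_layer)
        if b > 0
    }

if __name__ == "__main__":
    update = {"conv1": np.random.randn(3, 3), "fc": np.random.randn(4, 4)}
    compressed = compress_update_layerwise(update, bits_per_layer=[4, 0])
    print(list(compressed))  # ['conv1'] -- 'fc' was pruned (0 bits)
```

The per-layer bit budget here stands in for whatever allocation the paper's coding design prescribes; the coding step itself (e.g., entropy coding of the quantized indices) is omitted from this sketch.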
Pages: 15
Related Papers
37 items in total
  • [31] Incremental Layer-Wise Self-Supervised Learning for Efficient Unsupervised Speech Domain Adaptation On Device
    Huo, Zhouyuan
    Hwang, Dongseong
    Sim, Khe Chai
    Garg, Shefali
    Misra, Ananya
    Siddhartha, Nikhil
    Strohman, Trevor
    Beaufays, Francoise
    [J]. INTERSPEECH 2022, 2022: 4845-4849
  • [32] L-DAWA: Layer-wise Divergence Aware Weight Aggregation in Federated Self-Supervised Visual Representation Learning
    Rehman, Yasar Abbas Ur
    Gao, Yan
    de Gusmao, Pedro Porto Buarque
    Alibeigi, Mina
    Shen, Jiajun
    Lane, Nicholas D.
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023: 16418-16427
  • [33] Towards efficient federated learning-based scheme in medical cyber-physical systems for distributed data
    Guo, Kehua
    Li, Nan
    Kang, Jian
    Zhang, Jian
    [J]. SOFTWARE-PRACTICE & EXPERIENCE, 2021, 51 (11): 2274-2289
  • [34] LTNN: An Energy-efficient Machine Learning Accelerator on 3D CMOS-RRAM for Layer-wise Tensorized Neural Network
    Huang, Hantao
    Ni, Leibin
    Yu, Hao
    [J]. 2017 30TH IEEE INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (SOCC), 2017: 280-285
  • [35] Energy Efficient Federated Learning Over Heterogeneous Mobile Devices via Joint Design of Weight Quantization and Wireless Transmission
    Chen, Rui
    Li, Liang
    Xue, Kaiping
    Zhang, Chi
    Pan, Miao
    Fang, Yuguang
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2023, 22 (12): 7451-7465
  • [36] An optimized and efficient multiuser data sharing using the selection scheme design secure approach and federated learning in cloud environment
    Patil, Shubangini
    Patil, Rekha
    [J]. INTERNATIONAL JOURNAL OF PERVASIVE COMPUTING AND COMMUNICATIONS, 2022
  • [37] Towards Fast and Energy-Efficient Hierarchical Federated Edge Learning: A Joint Design for Helper Scheduling and Resource Allocation
    Wen, Wanli
    Yang, Howard H.
    Xia, Wenchao
    Quek, Tony Q. S.
    [J]. IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022: 5378-5383