Towards Efficient Federated Learning: Layer-Wise Pruning-Quantization Scheme and Coding Design

Cited by: 5
Authors
Zhu, Zheqi [1 ,2 ]
Shi, Yuchen [1 ,2 ]
Xin, Gangtao [1 ,2 ]
Peng, Chenghui [3 ]
Fan, Pingyi [1 ,2 ]
Letaief, Khaled B. [4 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Beijing 100084, Peoples R China
[3] Huawei Technol, Wireless Technol Lab, Shanghai 201206, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
Keywords
federated learning; model pruning; parameter quantization; code design; layer-wise aggregation; communication-computation efficiency
DOI
10.3390/e25081205
CLC number (Chinese Library Classification)
O4 [Physics]
Discipline code
0702
Abstract
As a promising distributed learning paradigm, federated learning (FL) faces communication-computation bottlenecks in practical deployments. In this work, we focus on the pruning, quantization, and coding of FL models. By adopting layer-wise operations, we propose an explicit and universal scheme, FedLP-Q (federated learning with layer-wise pruning-quantization), which comprises pruning strategies for both homogeneous and heterogeneous scenarios, a stochastic quantization rule, and a corresponding coding scheme. Both theoretical and experimental evaluations show that FedLP-Q improves the communication and computation efficiency of the system with controllable performance degradation. The key novelty of FedLP-Q is that it is a joint pruning-quantization FL framework with layer-wise processing, and it can easily be applied in practical FL systems.
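The abstract's stochastic quantization rule is not spelled out in this record, but the standard unbiased form of such a rule can be sketched as follows. This is an illustrative sketch only, not the paper's exact scheme: `stochastic_quantize` and its parameters (`num_levels`, the uniform level grid over the tensor's range) are assumptions, chosen so that each weight rounds up or down at random and equals its original value in expectation.

```python
import numpy as np

def stochastic_quantize(w, num_levels=16, rng=None):
    """Unbiased stochastic quantization of a weight tensor (illustrative sketch).

    Maps each value onto one of `num_levels` uniform levels spanning
    [w.min(), w.max()], rounding up with probability equal to the fractional
    position between levels, so E[quantized] == original. Assumes w is not
    constant (w.max() > w.min()).
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (num_levels - 1)
    x = (w - lo) / scale                       # position in level units
    floor = np.floor(x)
    prob_up = x - floor                        # P(round up) = fractional part
    q = floor + (rng.random(w.shape) < prob_up)
    return lo + q * scale                      # dequantized values

w = np.random.randn(4, 4)
wq = stochastic_quantize(w, num_levels=16)
```

After quantization, each layer's weights need only log2(num_levels) bits per entry plus the two range scalars, which is the kind of communication saving the coding scheme then exploits further.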
Pages: 15