Communication-Efficient Vertical Federated Learning via Compressed Error Feedback

Times cited: 0
Authors
Valdeira, Pedro [1,2,3]
Xavier, Joao [2]
Soares, Claudia [4]
Chi, Yuejie [1]
Affiliations
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[2] Univ Lisbon, Inst Super Tecn, P-1049001 Lisbon, Portugal
[3] Inst Syst & Robot, Lab Robot & Engn Syst, P-1600011 Lisbon, Portugal
[4] Univ Nova Lisboa, NOVA Sch Sci & Technol, Dept Comp Sci, P-2829516 Caparica, Portugal
Funding
U.S. National Science Foundation;
Keywords
Servers; Compressors; Training; Convergence; Vectors; Federated learning; Receivers; Optimization methods; Electronic mail; Data models; Vertical federated learning; nonconvex optimization; communication-compressed optimization; QUANTIZATION;
DOI
10.1109/TSP.2025.3540655
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Code
0808; 0809;
Abstract
Communication overhead is a known bottleneck in federated learning (FL). To address this, lossy compression is commonly applied to the information communicated between the server and clients during training. In horizontal FL, where each client holds a subset of the samples, such communication-compressed training methods have recently seen significant progress. However, in their vertical FL counterparts, where each client holds a subset of the features, our understanding remains limited. To address this, we propose an error feedback compressed vertical federated learning (EF-VFL) method to train split neural networks. In contrast to previous communication-compressed methods for vertical FL, EF-VFL does not require a vanishing compression error for the gradient norm to converge to zero for smooth nonconvex problems. By leveraging error feedback, our method can achieve an O(1/T) convergence rate for a sufficiently large batch size, improving over the state-of-the-art O(1/√T) rate under O(1/√T) compression error, and matching the rate of uncompressed methods. Further, when the objective function satisfies the Polyak-Łojasiewicz inequality, our method converges linearly. In addition to improving convergence, our method also supports the use of private labels. Numerical experiments show that EF-VFL significantly improves over the prior art, confirming our theoretical results.
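The abstract leans on error feedback, a standard device in communication-compressed optimization: rather than sending a lossily compressed message directly, each party compresses the current message plus the residual left over from previous rounds, so compression error is corrected over time instead of accumulating. The sketch below illustrates only this generic mechanism, not the paper's exact EF-VFL update for split neural networks; the NumPy top-k sparsifier and loop structure are illustrative assumptions.

```python
import numpy as np

def top_k(v, k):
    # Hypothetical compressor: keep only the k largest-magnitude entries.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def ef_step(message, error, k):
    # Error feedback: compress the message plus the residual carried over
    # from earlier rounds, then store what was lost for the next round.
    corrected = message + error
    sent = top_k(corrected, k)
    return sent, corrected - sent

# Toy usage: the residual re-enters later rounds, so information lost to
# compression is eventually transmitted rather than discarded.
rng = np.random.default_rng(0)
error = np.zeros(8)
for _ in range(3):
    sent, error = ef_step(rng.standard_normal(8), error, k=2)
```

Under this mechanism the per-round compression error need not vanish for convergence, which is the intuition behind the paper's improvement over methods that require an O(1/√T) compression error.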
Pages: 1065-1080
Number of pages: 16