Communication-Efficient Vertical Federated Learning via Compressed Error Feedback

Cited: 0
Authors
Valdeira, Pedro [1 ,2 ,3 ]
Xavier, Joao [2 ]
Soares, Claudia [4 ]
Chi, Yuejie [1 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Elect & Comp Engn, Pittsburgh, PA 15213 USA
[2] Univ Lisbon, Inst Super Tecn, P-1049001 Lisbon, Portugal
[3] Inst Syst & Robot, Lab Robot & Engn Syst, P-1600011 Lisbon, Portugal
[4] Univ Nova Lisboa, NOVA Sch Sci & Technol, Dept Comp Sci, P-2829516 Caparica, Portugal
Funding
U.S. National Science Foundation;
Keywords
Servers; Compressors; Training; Convergence; Vectors; Federated learning; Receivers; Optimization methods; Electronic mail; Data models; Vertical federated learning; nonconvex optimization; communication-compressed optimization; QUANTIZATION;
DOI
10.1109/TSP.2025.3540655
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Communication overhead is a known bottleneck in federated learning (FL). To address this, lossy compression is commonly used on the information communicated between the server and clients during training. In horizontal FL, where each client holds a subset of the samples, such communication-compressed training methods have recently seen significant progress. However, in their vertical FL counterparts, where each client holds a subset of the features, our understanding remains limited. To bridge this gap, we propose an error feedback compressed vertical federated learning (EF-VFL) method to train split neural networks. In contrast to previous communication-compressed methods for vertical FL, EF-VFL does not require a vanishing compression error for the gradient norm to converge to zero for smooth nonconvex problems. By leveraging error feedback, our method can achieve an O(1/T) convergence rate for a sufficiently large batch size, improving over the state-of-the-art O(1/√T) rate under O(1/√T) compression error, and matching the rate of uncompressed methods. Further, when the objective function satisfies the Polyak-Łojasiewicz inequality, our method converges linearly. In addition to improving convergence, our method also supports the use of private labels. Numerical experiments show that EF-VFL significantly improves over the prior art, confirming our theoretical results.
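To make the error-feedback mechanism referenced in the abstract concrete, below is a minimal sketch of one error-feedback compression step, not the paper's EF-VFL algorithm itself. It assumes a top-k sparsifier as the contractive compressor; the names top_k and ef_compress and the parameter k are illustrative choices, not taken from the paper. The residual carried between rounds is what allows convergence without requiring the compression error itself to vanish.

import numpy as np

def top_k(v, k):
    # Keep the k largest-magnitude entries of v and zero the rest;
    # a common contractive compressor (the paper's choice may differ).
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_compress(v, error, k):
    # One error-feedback step: compress the error-corrected message and
    # carry the residual forward so compression errors do not accumulate.
    corrected = v + error            # fold in residual from earlier rounds
    message = top_k(corrected, k)    # lossy message actually transmitted
    new_error = corrected - message  # residual fed back at the next round
    return message, new_error

# Usage: a client compresses a vector each round; with error feedback,
# entries dropped in one round re-enter the message in later rounds.
v = np.array([0.5, -2.0, 0.1, 1.5])
err = np.zeros_like(v)
msg, err = ef_compress(v, err, k=2)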
Pages: 1065-1080
Page count: 16