Communication Efficient and Byzantine Tolerant Distributed Learning

Cited: 0
Authors
Ghosh, Avishek [1]
Maity, Raj Kumar [2]
Kadhe, Swanand [1]
Mazumdar, Arya [2]
Ramchandran, Kannan [1]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] UMASS Amherst, Coll Informat & Comp Sci, Amherst, MA USA
DOI
10.1109/isit44484.2020.9174391
CLC number
TP301 [Theory, Methods]
Discipline classification code
081202
Abstract
We develop a communication-efficient distributed learning algorithm that is robust against Byzantine worker machines. We propose and analyze a distributed gradient-descent algorithm that performs simple thresholding based on gradient norms to mitigate Byzantine failures. We show that the (statistical) error rate of our algorithm matches that of Yin et al., 2018, which uses more complicated schemes (such as coordinate-wise median or trimmed mean). Furthermore, for communication efficiency, we consider a generic class of delta-approximate compressors from Karimireddy et al., 2019, that encompasses sign-based compressors and top-k sparsification. Our algorithm uses compressed gradients for aggregation and gradient norms for Byzantine removal. We establish the statistical error rate of the algorithm for arbitrary (convex or non-convex) smooth loss functions. We show that, in a certain regime of delta, the rate of convergence is not affected by the compression operation. We experimentally validate our results and show good convergence performance on convex (least-squares regression) and non-convex (neural network training) problems.
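To make the scheme concrete, here is a minimal sketch (not the authors' code) of the two ingredients the abstract describes: a delta-approximate compressor, instantiated here as top-k sparsification, and norm-based thresholding that discards the workers reporting the largest gradient norms before averaging. The function names, the trim fraction beta, and the toy data below are illustrative assumptions.

```python
# Minimal sketch of norm-thresholded aggregation with compressed gradients.
# Assumptions: top-k sparsification as the compressor and a trim fraction
# `beta`; helper names are illustrative, not taken from the paper.
import numpy as np

def top_k(g, k):
    """Top-k sparsification: keep the k largest-magnitude coordinates.

    One member of the delta-approximate compressor class of Karimireddy
    et al., 2019, with delta = k / len(g) in the worst case.
    """
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def robust_aggregate(compressed_grads, norms, beta):
    """Average compressed gradients after norm-based Byzantine removal.

    Workers whose reported gradient norm falls in the top beta fraction
    are discarded; the remaining compressed gradients are averaged.
    """
    m = len(norms)
    n_keep = m - int(beta * m)
    keep = np.argsort(norms)[:n_keep]  # workers with the smallest norms survive
    return np.mean([compressed_grads[i] for i in keep], axis=0)

# Toy usage: 10 workers, 2 Byzantine ones sending blown-up gradients.
rng = np.random.default_rng(0)
d, m, k = 50, 10, 5
grads = [rng.normal(size=d) for _ in range(m)]
grads[0] *= 100.0  # Byzantine worker
grads[1] *= 100.0  # Byzantine worker
compressed = [top_k(g, k) for g in grads]           # sent for aggregation
norms = [np.linalg.norm(g) for g in grads]          # sent for removal
update = robust_aggregate(compressed, norms, beta=0.2)
print(update.shape)  # (50,) -- robust averaged update
```

Sign-based compressors fit the same interface: swap top_k for, e.g., a scaled-sign map, and the aggregation step is unchanged, since removal relies only on the reported norms.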
Pages: 2545-2550
Number of pages: 6
Related papers
50 in total
  • [11] Communication Efficient Coreset Sampling for Distributed Learning
    Fan, Yawen
    Li, Husheng
    2018 IEEE 19TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), 2018, : 76 - 80
  • [12] Communication-Efficient Distributed Learning: An Overview
    Cao, Xuanyu
    Basar, Tamer
    Diggavi, Suhas
    Eldar, Yonina C.
    Letaief, Khaled B.
    Poor, H. Vincent
    Zhang, Junshan
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2023, 41 (04) : 851 - 873
  • [13] Byzantine-Robust and Communication-Efficient Personalized Federated Learning
    Zhang, Jiaojiao
    He, Xuechao
    Huang, Yue
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 26 - 39
  • [14] BYZANTINE FAULT TOLERANT DISTRIBUTED QUICKEST CHANGE DETECTION
    Bayraktar, Erhan
    Lai, Lifeng
    SIAM JOURNAL ON CONTROL AND OPTIMIZATION, 2015, 53 (02) : 575 - 591
  • [15] Byzantine-Tolerant Methods for Distributed Variational Inequalities
    Tupitsa, Nazarii
    Almansoori, Abdulla Jasem
    Wu, Yanlin
    Takac, Martin
    Nandakumar, Karthik
    Horvath, Samuel
    Gorbunov, Eduard
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [16] Optimal Mobile Byzantine Fault Tolerant Distributed Storage
    Bonomi, Silvia
    Del Pozzo, Antonella
    Potop-Butucaru, Maria
    Tixeuil, Sebastien
    PROCEEDINGS OF THE 2016 ACM SYMPOSIUM ON PRINCIPLES OF DISTRIBUTED COMPUTING (PODC'16), 2016, : 269 - 278
  • [17] BYZANTINE-ROBUST AND COMMUNICATION-EFFICIENT DISTRIBUTED NON-CONVEX LEARNING OVER NON-IID DATA
    He, Xuechao
    Zhu, Heng
    Ling, Qing
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 5223 - 5227
  • [18] Communication Efficient Distributed Learning Over Wireless Channels
    Achituve, Idan
    Wang, Wenbo
    Fetaya, Ethan
    Leshem, Amir
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1402 - 1406
  • [19] Communication Efficient Distributed Learning with Feature Partitioned Data
    Zhang, Bingwen
    Geng, Jun
    Xu, Weiyu
    Lai, Lifeng
    2018 52ND ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2018
  • [20] More communication-efficient distributed sparse learning
    Zhou, Xingcai
    Yang, Guang
    INFORMATION SCIENCES, 2024, 668