Communication Efficient and Byzantine Tolerant Distributed Learning

Cited by: 0
Authors
Ghosh, Avishek [1 ]
Maity, Raj Kumar [2 ]
Kadhe, Swanand [1 ]
Mazumdar, Arya [2 ]
Ramachandran, Kannan [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] UMASS Amherst, Coll Informat & Comp Sci, Amherst, MA USA
DOI
10.1109/isit44484.2020.9174391
CLC number: TP301 [Theory, Methods]
Discipline code: 081202
Abstract
We develop a communication-efficient distributed learning algorithm that is robust against Byzantine worker machines. We propose and analyze a distributed gradient-descent algorithm that performs simple thresholding based on gradient norms to mitigate Byzantine failures. We show that the (statistical) error rate of our algorithm matches that of Yin et al., 2018, which uses more complicated schemes (such as coordinate-wise median or trimmed mean). Furthermore, for communication efficiency, we consider a generic class of delta-approximate compressors from Karimireddy et al., 2019, that encompasses sign-based compressors and top-k sparsification. Our algorithm uses compressed gradients for aggregation and gradient norms for Byzantine removal. We establish the statistical error rate of the algorithm for arbitrary (convex or non-convex) smooth loss functions. We show that, in a certain regime of delta, the rate of convergence is not affected by the compression operation. We experimentally validate our results and demonstrate good convergence for both convex (least-squares regression) and non-convex (neural network training) problems.
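The aggregation rule the abstract describes — workers send compressed gradients along with their norms, and the server discards the largest-norm fraction before averaging — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the trim fraction `beta`, and the scaled-sign compressor are assumptions (the paper's delta-approximate class also covers top-k sparsification).

```python
import numpy as np

def sign_compress(g):
    # Scaled 1-bit sign compressor: a standard example of a
    # delta-approximate compressor (illustrative choice).
    return np.mean(np.abs(g)) * np.sign(g)

def norm_filtered_mean(gradients, beta):
    # Norm-based thresholding: drop the beta fraction of workers whose
    # reported gradient norms are largest, then average the survivors.
    norms = np.array([np.linalg.norm(g) for g in gradients])
    keep = int(np.ceil(len(gradients) * (1.0 - beta)))
    survivors = np.argsort(norms)[:keep]
    return np.mean([gradients[i] for i in survivors], axis=0)

# Toy round: 8 honest workers report compressed gradients near the true
# gradient; 2 Byzantine workers send large malicious vectors.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0])
workers = [sign_compress(true_grad + 0.01 * rng.standard_normal(2))
           for _ in range(8)]
workers += [np.array([100.0, 100.0]) for _ in range(2)]
robust_estimate = norm_filtered_mean(workers, beta=0.3)
```

Honest compressed gradients concentrate near the compressed true gradient, while Byzantine vectors with inflated norms are the first to be trimmed; the paper's claim is that this simple rule already matches the statistical error rate of more elaborate defenses such as coordinate-wise median or trimmed mean.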
Pages: 2545-2550 (6 pages)
Related papers (50 records in total)
  • [1] Communication-Efficient and Byzantine-Robust Distributed Learning
    Ghosh, Avishek
    Maity, Raj Kumar
    Kadhe, Swanand
    Mazumdar, Arya
    Ramchandran, Kannan
    2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020,
  • [2] Communication-Efficient and Byzantine-Robust Distributed Learning with Error Feedback
    Ghosh A.
    Maity R.K.
    Kadhe S.
    Mazumdar A.
    Ramchandran K.
    IEEE Journal on Selected Areas in Information Theory, 2021, 2 (03): : 942 - 953
  • [3] Communication-efficient and Byzantine-robust distributed learning with statistical guarantee
    Zhou, Xingcai
    Chang, Le
    Xu, Pengfei
    Lv, Shaogao
    PATTERN RECOGNITION, 2023, 137
  • [4] Communication-Efficient and Byzantine-Robust Distributed Stochastic Learning with Arbitrary Number of Corrupted Workers
Xu, Jian
    Tong, Xinyi
    Huang, Shao-Lun
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 5415 - 5420
  • [5] An Efficient Byzantine Fault Tolerant Agreement
    Saini, Poonam
    Singh, Awadhesh Kumar
    INTERNATIONAL CONFERENCE ON METHODS AND MODELS IN SCIENCE AND TECHNOLOGY (ICM2ST-10), 2010, 1324 : 162 - 165
  • [6] Pando: Efficient Byzantine-Tolerant Distributed Sensor Fusion using Forest Ensembles
    Behrens, Hans Walter
    Candan, K. Selcuk
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [7] Byzantine-Tolerant Distributed Coordinate Descent
    Data, Deepesh
    Diggavi, Suhas
    2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2019, : 2724 - 2728
  • [8] A Byzantine fault tolerant distributed commit protocol
    Zhao, Wenbing
    DASC 2007: THIRD IEEE INTERNATIONAL SYMPOSIUM ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, PROCEEDINGS, 2007, : 37 - +
  • [9] LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning
    Zhang, Jingjing
    Simeone, Osvaldo
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (03) : 962 - 974
  • [10] Byzantine Tolerant Algorithms for Federated Learning
    Xia, Qi
    Tao, Zeyi
    Li, Qun
    Chen, Songqing
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2023, 10 (06): : 3172 - 3183