Communication Efficient and Byzantine Tolerant Distributed Learning

Cited by: 0
Authors:
Ghosh, Avishek [1 ]
Maity, Raj Kumar [2 ]
Kadhe, Swanand [1 ]
Mazumdar, Arya [2 ]
Ramachandran, Kannan [1 ]
Affiliations:
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] UMASS Amherst, Coll Informat & Comp Sci, Amherst, MA USA
DOI: 10.1109/isit44484.2020.9174391
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Code: 081202
Abstract
We develop a communication-efficient distributed learning algorithm that is robust against Byzantine worker machines. We propose and analyze a distributed gradient-descent algorithm that performs simple thresholding based on gradient norms to mitigate Byzantine failures. We show that the (statistical) error rate of our algorithm matches that of Yin et al., 2018, which uses more complicated schemes (such as coordinate-wise median or trimmed mean). Furthermore, for communication efficiency, we consider a generic class of delta-approximate compressors from Karimireddy et al., 2019, which encompasses sign-based compressors and top-k sparsification. Our algorithm uses compressed gradients for aggregation and gradient norms for Byzantine removal. We establish the statistical error rate of the algorithm for arbitrary (convex or non-convex) smooth loss functions, and we show that, in a certain regime of delta, the rate of convergence is not affected by the compression operation. We validate our results experimentally, demonstrating good convergence for both convex (least-squares regression) and non-convex (neural network training) problems.
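To make the two ingredients of the abstract concrete, the following is a minimal sketch, not the authors' reference implementation: a delta-approximate compressor (instantiated here as top-k sparsification, which satisfies ||C(g) - g||^2 <= (1 - delta)||g||^2 with delta = k/d) and norm-based thresholding aggregation at the parameter server. The function names, the trim fraction, and the toy setup are illustrative assumptions.

import numpy as np

def top_k_compress(g, k):
    # Top-k sparsification: keep only the k largest-magnitude coordinates.
    # For a d-dimensional gradient this is a delta-approximate compressor
    # with delta = k/d (Karimireddy et al., 2019 definition).
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def norm_thresholded_aggregate(compressed_grads, norms, trim_fraction):
    # Norm-based thresholding: discard the trim_fraction of workers that
    # report the largest gradient norms (suspected Byzantine), then
    # average the compressed gradients of the surviving workers.
    m = len(compressed_grads)
    n_keep = m - int(np.ceil(trim_fraction * m))
    keep = np.argsort(norms)[:n_keep]  # smallest-norm workers survive
    return np.mean([compressed_grads[i] for i in keep], axis=0)

# Toy round with 10 workers, 2 of them Byzantine (hypothetical setup).
rng = np.random.default_rng(0)
d, m, k = 100, 10, 20
true_grad = rng.normal(size=d)
grads = [true_grad + 0.1 * rng.normal(size=d) for _ in range(m)]
grads[0] = 100.0 * grads[0]   # Byzantine workers send wildly scaled updates
grads[1] = -50.0 * grads[1]
compressed = [top_k_compress(g, k) for g in grads]
norms = [float(np.linalg.norm(g)) for g in grads]  # scalar norms, sent alongside
naive = np.mean(compressed, axis=0)
robust = norm_thresholded_aggregate(compressed, norms, trim_fraction=0.3)
# The naive average is dragged far off by the scaled updates; the robust
# aggregate is off only by the compression error.
print(np.linalg.norm(naive - true_grad), np.linalg.norm(robust - true_grad))

In this sketch each worker sends both its compressed gradient and its scalar gradient norm; the server uses the norms only for removal and the compressed gradients only for averaging, mirroring the split described in the abstract.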
Pages: 2545 - 2550
Page count: 6
Related Papers (50 items in total)
  • [21] More communication-efficient distributed sparse learning
    Zhou, Xingcai
    Yang, Guang
    INFORMATION SCIENCES, 2024, 668
  • [22] Communication Efficient Distributed Machine Learning with the Parameter Server
    Li, Mu
    Andersen, David G.
    Smola, Alexander
    Yu, Kai
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 27 (NIPS 2014), 2014, 27
  • [23] A communication efficient distributed learning framework for smart environments
    Valerio, Lorenzo
    Passarella, Andrea
    Conti, Marco
    PERVASIVE AND MOBILE COMPUTING, 2017, 41 : 46 - 68
  • [24] Communication Efficient Distributed Learning for Kernelized Contextual Bandits
    Li, Chuanhao
    Wang, Huazheng
    Wang, Mengdi
    Wang, Hongning
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022
  • [25] Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
    Rammal, Ahmad
    Gruntkowska, Kaja
    Fedin, Nikita
    Gorbunov, Eduard
    Richtarik, Peter
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [26] Communication-Efficient and Byzantine-Robust Differentially Private Federated Learning
    Li, Min
    Xiao, Di
    Liang, Jia
    Huang, Hui
    IEEE COMMUNICATIONS LETTERS, 2022, 26 (08) : 1725 - 1729
  • [27] Genuinely distributed Byzantine machine learning
    El-Mhamdi, El-Mahdi
    Guerraoui, Rachid
    Guirguis, Arsany
    Hoang, Le-Nguyen
    Rouault, Sebastien
    DISTRIBUTED COMPUTING, 2022, 35 (04) : 305 - 331
  • [29] Efficient Middleware for Byzantine Fault Tolerant Database Replication
    Garcia, Rui
    Rodrigues, Rodrigo
    Preguica, Nuno
    EUROSYS 11: PROCEEDINGS OF THE EUROSYS 2011 CONFERENCE, 2011, : 107 - 121
  • [30] Azvasa:- Byzantine Fault Tolerant Distributed Commit with Proactive Recovery
    Mahajan, Sahil
    Singhal, Rahul
    2009 SECOND INTERNATIONAL CONFERENCE ON EMERGING TRENDS IN ENGINEERING AND TECHNOLOGY (ICETET 2009), 2009, : 722 - 726