Communication-Efficient and Byzantine-Robust Distributed Learning

Cited by: 6
Authors
Ghosh, Avishek [1 ]
Maity, Raj Kumar [2 ]
Kadhe, Swanand [1 ]
Mazumdar, Arya [2 ]
Ramchandran, Kannan [1 ]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] UMASS Amherst, Coll Informat & Comp Sci, Amherst, MA USA
Funding
U.S. National Science Foundation;
DOI
10.1109/ita50056.2020.9245017
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
We develop a communication-efficient distributed learning algorithm that is robust against Byzantine worker machines. We propose and analyze a distributed gradient-descent algorithm that performs simple thresholding based on gradient norms to mitigate Byzantine failures. We show that the (statistical) error rate of our algorithm matches that of [YCKB18], which uses more complicated schemes (such as coordinate-wise median or trimmed mean), and is thus optimal. Furthermore, for communication efficiency, we consider a generic class of delta-approximate compressors from [KRSJ19] that encompasses sign-based compressors and top-k sparsification. Our algorithm uses compressed gradients for aggregation and gradient norms for Byzantine removal. We establish the statistical error rate of the algorithm for arbitrary (convex or non-convex) smooth loss functions. We show that, in the regime where the compression factor delta is constant and the dimension of the parameter space is fixed, the rate of convergence is unaffected by the compression operation, so we effectively get compression for free. Moreover, we extend the compressed gradient descent algorithm with error feedback, proposed in [KRSJ19], to the distributed setting. We experimentally validate our results and demonstrate good convergence for convex (least-squares regression) and non-convex (neural network training) problems.
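To make the abstract's pipeline concrete, here is a minimal NumPy sketch of one round: workers compress their gradients with a sign-based delta-approximate compressor (with error feedback) and report their gradient norms; the server trims the workers with the largest norms and averages the surviving compressed gradients. All function names, the trimming rule details, and the toy data are illustrative assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def sign_compress(g):
    # One concrete delta-approximate compressor: a scaled sign vector
    # (one bit per coordinate plus a single scalar), in the spirit of the
    # sign-based compressors covered by [KRSJ19]; top-k sparsification
    # could be substituted here.
    return (np.linalg.norm(g, ord=1) / g.size) * np.sign(g)

def worker_step(grad, residual):
    # Error feedback on the worker side: compress the gradient plus the
    # residual carried over from earlier rounds, and keep the new
    # compression error locally for the next round.
    corrected = grad + residual
    msg = sign_compress(corrected)
    return msg, corrected - msg  # (compressed message, updated residual)

def robust_aggregate(norms, compressed_grads, beta):
    # Server side: each worker reports its gradient norm and compressed
    # gradient. Norm-based thresholding discards the ceil(beta * m)
    # workers with the largest reported norms, then averages the
    # compressed gradients of the survivors.
    m = len(norms)
    keep = np.argsort(norms)[: m - int(np.ceil(beta * m))]
    return np.mean([compressed_grads[i] for i in keep], axis=0)

# Toy round: 8 honest workers, 2 Byzantine workers sending huge gradients.
rng = np.random.default_rng(0)
grads = [rng.normal(size=5) for _ in range(8)] + [1e6 * np.ones(5)] * 2
residuals = [np.zeros(5) for _ in grads]
msgs, residuals = zip(*[worker_step(g, e) for g, e in zip(grads, residuals)])
norms = [float(np.linalg.norm(g)) for g in grads]
update = robust_aggregate(norms, list(msgs), beta=0.2)
print(update)  # Byzantine contributions are trimmed before averaging
```

The intuition behind trimming on norms alone is that a Byzantine worker that reports a small gradient norm can only perturb the average by a correspondingly small amount, so discarding the largest norms bounds the adversarial influence.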
Pages: 28
Related Papers
50 records in total
  • [31] Robust communication-efficient decentralized learning with heterogeneity
    Zhang, Xiao
    Wang, Yangyang
    Chen, Shuzhen
    Wang, Cui
    Yu, Dongxiao
    Cheng, Xiuzhen
    JOURNAL OF SYSTEMS ARCHITECTURE, 2023, 141
  • [32] More communication-efficient distributed sparse learning
    Zhou, Xingcai
    Yang, Guang
    INFORMATION SCIENCES, 2024, 668
  • [34] TrustDDL: A Privacy-Preserving Byzantine-Robust Distributed Deep Learning Framework
    Nikiel, Rene Klaus
    Mirabi, Meghdad
    Binnig, Carsten
    2024 54TH ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOPS, DSN-W 2024, 2024, : 55 - 62
  • [35] Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment
    Dong, Xingrong
    Wu, Zhaoxian
    Ling, Qing
    Tian, Zhi
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2024, 72 : 235 - 248
  • [36] AFLGuard: Byzantine-robust Asynchronous Federated Learning
    Fang, Minghong
    Liu, Jia
    Gong, Neil Zhenqiang
    Bentley, Elizabeth S.
    PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 632 - 646
  • [37] Efficient Byzantine-Robust and Privacy-Preserving Federated Learning on Compressive Domain
    Hu, Guiqiang
    Li, Hongwei
    Fan, Wenshu
    Zhang, Yushu
IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (04) : 7116 - 7127
  • [38] Differentially Private Byzantine-Robust Federated Learning
    Ma, Xu
    Sun, Xiaoqian
    Wu, Yuduo
    Liu, Zheli
    Chen, Xiaofeng
    Dong, Changyu
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 3690 - 3701
  • [39] Communication Efficient and Byzantine Tolerant Distributed Learning
    Ghosh, Avishek
    Maity, Raj Kumar
    Kadhe, Swanand
    Mazumdar, Arya
Ramchandran, Kannan
    2020 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2020, : 2545 - 2550
  • [40] SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning Training
    Mirabi, Meghdad
    Nikiel, Rene Klaus
    Binnig, Carsten
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 207 - 216