Genuinely distributed Byzantine machine learning

Cited by: 0

Authors
El-Mahdi El-Mhamdi
Rachid Guerraoui
Arsany Guirguis
Lê-Nguyên Hoang
Sébastien Rouault
Affiliations
[1] Ecole Polytechnique Fédérale de Lausanne (EPFL), School of Computer and Communication Sciences (IC)
Source
Distributed Computing | 2022 / Volume 35
Keywords
Distributed machine learning; Robust machine learning; Byzantine fault tolerance; Byzantine parameter servers
DOI
Not available
Abstract
Machine learning (ML) solutions are nowadays distributed according to the so-called server/worker architecture: one server holds the model parameters while several workers train the model. Such an architecture is clearly prone to various component failures, all of which can be encompassed within the spectrum of Byzantine behavior. Several approaches have recently been proposed to tolerate Byzantine workers, yet all of them require trusting a central parameter server. In this paper we initiate the study of the “general” Byzantine-resilient distributed machine learning problem, where no individual component is trusted. In particular, we distribute the parameter server computation over several nodes. We show that this problem can be solved in an asynchronous system despite the presence of $\frac{1}{3}$ Byzantine parameter servers (i.e., $n_{ps} > 3f_{ps} + 1$) and $\frac{1}{3}$ Byzantine workers (i.e., $n_w > 3f_w$), which is asymptotically optimal. We present a new algorithm, ByzSGD, which solves the general Byzantine-resilient distributed machine learning problem by relying on three major schemes. The first, scatter/gather, is a communication scheme whose goal is to bound the maximum drift among the models held by correct servers. The second, distributed median contraction (DMC), leverages the geometric properties of the median in high-dimensional spaces to bring the parameters of correct servers back close to each other, ensuring safe and lively learning. The third, minimum-diameter averaging (MDA), is a statistically robust gradient aggregation rule whose goal is to tolerate Byzantine workers; MDA requires only a loose bound on the variance of non-Byzantine gradient estimates, compared to existing alternatives such as Krum (Blanchard et al., in: Neural Information Processing Systems, pp 118-128, 2017). Interestingly, ByzSGD ensures Byzantine resilience without adding communication rounds (on a normal path) compared to vanilla non-Byzantine alternatives. ByzSGD requires, however, a larger number of messages, which, we show, can be reduced if we assume synchrony. We implemented ByzSGD on top of both TensorFlow and PyTorch, and we report on our evaluation results. In particular, we show that ByzSGD guarantees convergence with around 32% overhead compared to vanilla SGD. Furthermore, ByzSGD’s throughput overhead is 24–176% in the synchronous case and 28–220% in the asynchronous case.
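
To make the two aggregation primitives named in the abstract concrete, here is a minimal NumPy sketch of minimum-diameter averaging and of the coordinate-wise median on which DMC's contraction step relies. This is an illustration under stated assumptions, not code from the ByzSGD implementation: the names mda_aggregate and coordinatewise_median are hypothetical, and gradients/models are assumed to arrive as equal-shaped NumPy arrays.

from itertools import combinations

import numpy as np


def mda_aggregate(gradients, f):
    """Minimum-diameter averaging (MDA): average the subset of n - f
    gradients whose diameter (largest pairwise Euclidean distance) is
    smallest. Subset enumeration is exponential in n, so this sketch
    only suits the small worker counts of a single aggregation step."""
    n = len(gradients)
    assert n > 2 * f, "MDA is usually stated for n > 2f"
    best_subset, best_diameter = None, float("inf")
    for subset in combinations(range(n), n - f):
        diameter = max(
            (np.linalg.norm(gradients[i] - gradients[j])
             for i, j in combinations(subset, 2)),
            default=0.0,
        )
        if diameter < best_diameter:
            best_subset, best_diameter = subset, diameter
    return np.mean([gradients[i] for i in best_subset], axis=0)


def coordinatewise_median(models):
    """Coordinate-wise median over parameter vectors: the contraction
    primitive behind DMC, pulling correct servers' models back toward
    each other despite a minority of Byzantine inputs."""
    return np.median(np.stack(models), axis=0)


# Toy usage: 4 workers, 1 Byzantine. Three honest gradients cluster
# near (1, 1); the minimum-diameter subset excludes the outlier.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
print(mda_aggregate(honest + byzantine, f=1))     # approx. [1.0, 1.0]
print(coordinatewise_median(honest + byzantine))  # approx. [1.05, 0.95]

The design trade-off the abstract alludes to is visible here: MDA pays a combinatorial subset search (affordable for small n) in exchange for needing only a loose variance bound on honest gradients, while the median is cheap per step and owes its contraction effect to its geometry in high dimension.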
Pages: 305-331
Page count: 26
Related papers
50 items in total
  • [1] Genuinely distributed Byzantine machine learning
    El-Mhamdi, El-Mahdi
    Guerraoui, Rachid
    Guirguis, Arsany
    Hoang, Le-Nguyen
    Rouault, Sebastien
    DISTRIBUTED COMPUTING, 2022, 35 (04) : 305 - 331
  • [2] Byzantine fault tolerance in distributed machine learning: a survey
    Bouhata, Djamila
    Moumen, Hamouma
    Mazari, Jocelyn Ahmed
    Bounceur, Ahcene
JOURNAL OF EXPERIMENTAL & THEORETICAL ARTIFICIAL INTELLIGENCE, 2024
  • [3] Tolerating Adversarial Attacks and Byzantine Faults in Distributed Machine Learning
    Wu, Yusen
    Chen, Hao
    Wang, Xin
    Liu, Chao
    Nguyen, Phuong
    Yesha, Yelena
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 3380 - 3389
  • [5] SLC: A Permissioned Blockchain for Secure Distributed Machine Learning against Byzantine Attacks
    Liang, Lun
    Cao, Xianghui
    Zhang, Jun
    Sun, Changyin
    2020 CHINESE AUTOMATION CONGRESS (CAC 2020), 2020, : 7073 - 7078
  • [6] Byzantine Machine Learning: A Primer
    Guerraoui, Rachid
    Gupta, Nirupam
    Pinot, Rafael
    ACM COMPUTING SURVEYS, 2024, 56 (07)
  • [7] SafeML: A Privacy-Preserving Byzantine-Robust Framework for Distributed Machine Learning Training
    Mirabi, Meghdad
    Nikiel, Rene Klaus
    Binnig, Carsten
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 207 - 216
  • [8] Byzantine-robust distributed support vector machine
    Wang, Xiaozhou
    Liu, Weidong
    Mao, Xiaojun
    SCIENCE CHINA MATHEMATICS, 2025, 68 (03) : 707 - 728
  • [10] Byzantine-Robust Distributed Learning With Compression
    Zhu, Heng
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2023, 9 : 280 - 294