Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data

Cited: 0
Authors
Jin, Richeng [1 ]
He, Xiaofan [2 ]
Dai, Huaiyu [1 ]
Affiliations
[1] North Carolina State Univ, Dept ECE, Raleigh, NC 27695 USA
[2] Wuhan Univ, Elect Informat Sch, Wuhan, Hubei, Peoples R China
Funding
US National Science Foundation
Keywords
DOI
None available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
The recent advances in sensor technologies and smart devices enable the collaborative collection of vast volumes of data from multiple information sources. As a promising tool for efficiently extracting useful information from such big data, machine learning has been pushed to the forefront and has seen great success in a wide range of relevant areas such as computer vision, health care, and financial market analysis. To accommodate the large volume of data, there is a surge of interest in the design of distributed machine learning, among which stochastic gradient descent (SGD) is one of the most widely adopted methods. Nonetheless, distributed machine learning methods may be vulnerable to Byzantine attacks, in which an adversary deliberately shares falsified information to disrupt the intended machine learning procedure. In this work, two asynchronous Byzantine tolerant SGD algorithms are proposed, in which the honest collaborative workers are assumed to store the model parameters derived from their own local data and use them as the ground truth. The proposed algorithms can deal with an arbitrary number of Byzantine attackers and are provably convergent. Simulation results based on a real-world dataset are presented to verify the theoretical results and demonstrate the effectiveness of the proposed algorithms.
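To make the screening idea described in the abstract concrete, the minimal Python sketch below illustrates one plausible reading: each honest worker keeps the model it derived from its own local data as a reference, and only averages in peer parameters that stay within a distance threshold of that reference. This is an illustrative assumption, not the paper's actual algorithm; the function names, the threshold tau, and the acceptance rule are all hypothetical.

import numpy as np

def local_sgd_step(theta, grad_fn, batch, lr=0.01):
    # One SGD step on the worker's own data; the resulting local model
    # serves as the worker's reference ("ground truth") for screening peers.
    return theta - lr * grad_fn(theta, batch)

def screen_and_aggregate(theta_local, peer_thetas, tau=1.0):
    # Keep only peer parameter vectors whose distance to the local reference
    # is below the (illustrative) threshold tau, then average them together
    # with the local model itself.
    accepted = [p for p in peer_thetas
                if np.linalg.norm(p - theta_local) <= tau]
    return np.mean([theta_local] + accepted, axis=0)

Because the worker always retains its own locally derived model, falsified peer updates can simply be discarded without stalling the iteration, which matches the intuition stated in the abstract for tolerating an arbitrary number of Byzantine attackers.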
Pages: 6
Related Papers
50 records in total
  • [1] Byzantine Fault Tolerant Distributed Stochastic Gradient Descent Based on Over-the-Air Computation
    Park, Sangjun
    Choi, Wan
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2022, 70 (05) : 3204 - 3219
  • [2] Byzantine Stochastic Gradient Descent
    Alistarh, Dan
    Allen-Zhu, Zeyuan
    Li, Jerry
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Byzantine Fault-Tolerant Parallelized Stochastic Gradient Descent for Linear Regression
    Gupta, Nirupam
    Vaidya, Nitin H.
    2019 57TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2019, : 415 - 420
  • [4] Data Encoding for Byzantine-Resilient Distributed Gradient Descent
    Data, Deepesh
    Song, Linqi
    Diggavi, Suhas
    2018 56TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2018, : 863 - 870
  • [5] Byzantine-Tolerant Distributed Coordinate Descent
    Data, Deepesh
    Diggavi, Suhas
    2019 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2019, : 2724 - 2728
  • [6] Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
    Blanchard, Peva
    El Mhamdi, El Mahdi
    Guerraoui, Rachid
    Stainer, Julien
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [7] Byzantine-Resilient Decentralized Stochastic Gradient Descent
    Guo, Shangwei
    Zhang, Tianwei
    Yu, Han
    Xie, Xiaofei
    Ma, Lei
    Xiang, Tao
    Liu, Yang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (06) : 4096 - 4106
  • [8] Bayesian Distributed Stochastic Gradient Descent
    Teng, Michael
    Wood, Frank
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [9] RECENT TRENDS IN STOCHASTIC GRADIENT DESCENT FOR MACHINE LEARNING AND BIG DATA
    Newton, David
    Pasupathy, Raghu
    Yousefian, Farzad
    2018 WINTER SIMULATION CONFERENCE (WSC), 2018, : 366 - 380
  • [10] BYZANTINE-ROBUST STOCHASTIC GRADIENT DESCENT FOR DISTRIBUTED LOW-RANK MATRIX COMPLETION
    He, Xuechao
    Ling, Qing
    Chen, Tianyi
    2019 IEEE DATA SCIENCE WORKSHOP (DSW), 2019, : 322 - 326