Distributed Byzantine Tolerant Stochastic Gradient Descent in the Era of Big Data

Cited by: 0
Authors:
Jin, Richeng [1 ]
He, Xiaofan [2 ]
Dai, Huaiyu [1 ]
Affiliations:
[1] North Carolina State Univ, Dept ECE, Raleigh, NC 27695 USA
[2] Wuhan Univ, Elect Informat Sch, Wuhan, Hubei, Peoples R China
Funding:
U.S. National Science Foundation
Keywords:
DOI:
Not available
CLC classification:
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes:
0808; 0809;
Abstract
Recent advances in sensor technologies and smart devices enable the collaborative collection of sheer volumes of data from multiple information sources. As a promising tool for efficiently extracting useful information from such big data, machine learning has been pushed to the forefront and has seen great success in a wide range of areas such as computer vision, health care, and financial market analysis. To accommodate the large volume of data, there is a surge of interest in the design of distributed machine learning, in which stochastic gradient descent (SGD) is one of the most widely adopted methods. Nonetheless, distributed machine learning methods may be vulnerable to Byzantine attacks, in which an adversary deliberately shares falsified information to disrupt the intended learning procedure. In this work, two asynchronous Byzantine-tolerant SGD algorithms are proposed, in which each honest collaborative worker is assumed to store the model parameters derived from its own local data and use them as the ground truth. The proposed algorithms can handle an arbitrary number of Byzantine attackers and are provably convergent. Simulation results based on a real-world dataset verify the theoretical results and demonstrate the effectiveness of the proposed algorithms.
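To make the screening idea in the abstract concrete, the following minimal Python sketch illustrates how a worker might use the gradient computed from its own local data as a reference and accept a received update only if it stays close to that reference. This is an illustrative assumption, not the authors' actual algorithm: the least-squares model, the function names, and the parameter accept_threshold are all hypothetical.

```python
# Minimal, hypothetical sketch of Byzantine-tolerant asynchronous SGD filtering:
# each honest worker keeps a local estimate derived from its own data ("ground
# truth") and screens incoming gradients against it. Names and thresholds are
# illustrative assumptions, not the paper's algorithm or notation.
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on the worker's own data (illustrative model)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def byzantine_tolerant_step(w, X, y, received_grads, lr=0.01, accept_threshold=1.0):
    """One SGD step that accepts a received gradient only if it lies within
    accept_threshold of the gradient computed from the worker's local data;
    otherwise the worker falls back to its own local gradient."""
    g_local = local_gradient(w, X, y)
    accepted = [g for g in received_grads
                if np.linalg.norm(g - g_local) <= accept_threshold]
    # Always include the local gradient so an arbitrary number of
    # falsified updates cannot stall progress.
    g_agg = np.mean([g_local] + accepted, axis=0)
    return w - lr * g_agg

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, true_w = rng.normal(size=(100, 5)), np.ones(5)
    y = X @ true_w + 0.01 * rng.normal(size=100)
    w = np.zeros(5)
    for _ in range(200):
        honest = [local_gradient(w, X, y) + 0.05 * rng.normal(size=5)]
        byzantine = [rng.normal(scale=50.0, size=5)]  # falsified update
        w = byzantine_tolerant_step(w, X, y, honest + byzantine)
    print("distance to true model:", np.linalg.norm(w - true_w))
```

In this toy run the falsified gradient is rejected because it is far from the worker's locally computed gradient, so the iterate still converges toward the true model despite the attacker.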
Pages: 6