The Convergence of Stochastic Gradient Descent in Asynchronous Shared Memory

Cited by: 19
Authors
Alistarh, Dan [1 ]
De Sa, Christopher [2 ]
Konstantinov, Nikola [1 ]
Affiliations
[1] IST Austria, Klosterneuburg, Austria
[2] Cornell Univ, Ithaca, NY USA
Funding
EU Horizon 2020;
Keywords
Shared Memory; Stochastic Gradient Descent; Distributed Optimization; Lower Bounds;
DOI
10.1145/3212734.3212763
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Stochastic Gradient Descent (SGD) is a fundamental algorithm in machine learning, representing the optimization backbone for training several classic models, from regression to neural networks. Given the recent practical focus on distributed machine learning, significant work has been dedicated to the convergence properties of this algorithm under the inconsistent and noisy updates arising from execution in a distributed environment. However, surprisingly, the convergence properties of this classic algorithm in the standard shared-memory model are still not well-understood. In this work, we address this gap, and provide new convergence bounds for lock-free concurrent stochastic gradient descent, executing in the classic asynchronous shared memory model, against a strong adaptive adversary. Our results give improved upper and lower bounds on the "price of asynchrony" when executing the fundamental SGD algorithm in a concurrent setting. They show that this classic optimization tool can converge faster and with a wider range of parameters than previously known under asynchronous iterations. At the same time, we exhibit a fundamental trade-off between the maximum delay in the system and the rate at which SGD can converge, which governs the set of parameters under which this algorithm can still work efficiently.
Pages: 169 - 177
Number of pages: 9
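
To make the setting concrete, the sketch below illustrates lock-free ("Hogwild!"-style) concurrent SGD on a shared parameter vector, using a simple least-squares objective and Python threads as a stand-in for shared-memory processes. It is an illustrative approximation only: the function and parameter names are invented here, CPython's GIL serializes much of the actual work, and it does not reproduce the algorithm variant or the analysis studied in the paper.

```python
# Minimal sketch of lock-free asynchronous SGD (Hogwild!-style) on a shared model.
# Assumptions: least-squares objective, Python threads standing in for shared-memory
# processes; names (run_async_sgd, lr, steps_per_thread) are illustrative, not from the paper.
import threading
import numpy as np

def run_async_sgd(X, y, num_threads=4, steps_per_thread=5000, lr=0.01):
    n, d = X.shape
    w = np.zeros(d)  # shared parameter vector, updated by all threads without locks

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(steps_per_thread):
            i = rng.integers(n)                    # sample one data point
            w_local = w.copy()                     # possibly stale snapshot of the model
            grad = (X[i] @ w_local - y[i]) * X[i]  # stochastic gradient of 0.5*(x_i.w - y_i)^2
            np.subtract(w, lr * grad, out=w)       # lock-free write-back; races are tolerated

    threads = [threading.Thread(target=worker, args=(s,)) for s in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_star = rng.normal(size=10)
    X = rng.normal(size=(2000, 10))
    y = X @ w_star + 0.01 * rng.normal(size=2000)
    w_hat = run_async_sgd(X, y)
    print("distance to optimum:", np.linalg.norm(w_hat - w_star))
```

The write-back is deliberately unsynchronized: stale snapshots and racy per-coordinate writes are exactly the kind of inconsistent, delayed updates whose effect on the convergence rate the paper's upper and lower bounds quantify.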