SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization

Cited by: 0
Authors
Wai, Hoi-To [1 ]
Freris, Nikolaos M. [2 ,3 ]
Nedic, Angelia [1 ]
Scaglione, Anna [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
[2] New York Univ Abu Dhabi, Div Engn, Abu Dhabi, U Arab Emirates
[3] NYU, Tandon Sch Engn, Brooklyn, NY USA
Keywords
Distributed optimization; Incremental methods; Asynchronous algorithms; Randomized algorithms; Multi-agent systems; Machine learning; Subgradient methods; Clocks
DOI: Not available
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence for the SUCAG method. When the initial point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, yields an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
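For intuition, the following is a minimal Python sketch of the curvature-aided gradient tracking idea summarized in the abstract, applied to a toy least-squares finite sum with uniform random selection of the component updated at each step. This is a sketch under stated assumptions, not the paper's algorithm: the problem data, the step size gamma, and all names (grad, hess, anchors, g_store, H_store) are illustrative, and the paper's asynchronous delay model, unbiased estimator construction, and Markov-chain sampling scheme are not reproduced here.

# Minimal sketch of curvature-aided gradient tracking on a toy
# least-squares finite sum (illustrative only; not the paper's
# exact SUCAG estimator, delay model, or sampling scheme).
import numpy as np

rng = np.random.default_rng(0)
m, d = 20, 5                        # number of component functions, dimension
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def grad(i, theta):
    # gradient of f_i(theta) = 0.5 * (a_i^T theta - b_i)^2
    return A[i] * (A[i] @ theta - b[i])

def hess(i):
    # Hessian of f_i (constant, since each f_i is quadratic)
    return np.outer(A[i], A[i])

theta = np.zeros(d)
anchors = np.zeros((m, d))                 # iterate at each component's last visit
g_store = np.array([grad(i, anchors[i]) for i in range(m)])
H_store = np.array([hess(i) for i in range(m)])
gamma = 1.0 / np.linalg.norm(A, 2) ** 2    # an assumed safe step size for this toy

for k in range(500):
    i = rng.integers(m)            # uniform random selection; SUCAG also admits
    anchors[i] = theta             # selection via a random walk on a graph
    g_store[i] = grad(i, theta)
    # curvature-aided estimate of the total gradient: Taylor-correct every
    # stored component gradient from its anchor point to the current iterate
    g_hat = sum(g_store[j] + H_store[j] @ (theta - anchors[j]) for j in range(m))
    theta = theta - gamma * g_hat

# at a stationary point the normal-equation residual A^T (A theta - b) vanishes
print(np.linalg.norm(A.T @ (A @ theta - b)))

Because each component here is quadratic, the Taylor correction from a component's anchor point to the current iterate is exact and the tracked quantity coincides with the full gradient; for general strongly convex components the correction is only approximate, and controlling that tracking error is what the paper's linear-rate analysis addresses.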
Pages: 1751-1756 (6 pages)
Related Papers (50 total)
  • [1] Curvature-aided Incremental Aggregated Gradient Method
    Wai, Hoi-To
    Shi, Wei
    Nedic, Angelia
    Scaglione, Anna
    2017 55TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2017, : 526 - 532
  • [2] Confidence region for distributed stochastic optimization problem via stochastic gradient tracking method
    Zhao, Shengchao
    Liu, Yongchao
    AUTOMATICA, 2024, 159
  • [3] A Distributed Stochastic Gradient Tracking Method
    Pu, Shi
    Nedic, Angelia
    2018 IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2018, : 963 - 968
  • [4] ON DISTRIBUTED STOCHASTIC GRADIENT ALGORITHMS FOR GLOBAL OPTIMIZATION
    Swenson, Brian
    Sridhar, Anirudh
    Poor, H. Vincent
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8594 - 8598
  • [5] RANDOM GRADIENT EXTRAPOLATION FOR DISTRIBUTED AND STOCHASTIC OPTIMIZATION
    Lan, Guanghui
    Zhou, Yi
    SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (04) : 2753 - 2782
  • [6] Constructing unbiased gradient estimators with finite variance for conditional stochastic optimization
    Goda, Takashi
    Kitade, Wataru
    MATHEMATICS AND COMPUTERS IN SIMULATION, 2023, 204 : 743 - 763
  • [7] Distributed Adaptive Gradient Algorithm With Gradient Tracking for Stochastic Nonconvex Optimization
    Han, Dongyu
    Liu, Kun
    Lin, Yeming
    Xia, Yuanqing
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2024, 69 (09) : 6333 - 6340
  • [8] Block Mirror Stochastic Gradient Method For Stochastic Optimization
    Yang, Jinda
    Song, Haiming
    Li, Xinxin
    Hou, Di
    JOURNAL OF SCIENTIFIC COMPUTING, 2023, 94 (03)
  • [9] Asynchronous Distributed Semi-Stochastic Gradient Optimization
    Zhang, Ruiliang
    Zheng, Shuai
    Kwok, James T.
    THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 2323 - 2329