SUCAG: Stochastic Unbiased Curvature-aided Gradient Method for Distributed Optimization

Cited by: 0
Authors
Wai, Hoi-To [1 ]
Freris, Nikolaos M. [2 ,3 ]
Nedic, Angelia [1 ]
Scaglione, Anna [1 ]
Affiliations
[1] Arizona State Univ, Sch Elect Comp & Energy Engn, Tempe, AZ 85281 USA
[2] New York Univ Abu Dhabi, Div Engn, Abu Dhabi, U Arab Emirates
[3] NYU, Tandon Sch Engn, Brooklyn, NY USA
Keywords
Distributed optimization; Incremental methods; Asynchronous algorithms; Randomized algorithms; Multi-agent systems; Machine learning; Subgradient methods; Clocks
DOI
Not available
CLC Classification
TP [Automation and Computer Technology]
Subject Classification
0812
Abstract
We propose and analyze a new stochastic gradient method, which we call Stochastic Unbiased Curvature-aided Gradient (SUCAG), for finite-sum optimization problems. SUCAG constitutes an unbiased total-gradient tracking technique that uses Hessian information to accelerate convergence. We analyze our method under the general asynchronous model of computation, in which each function is selected infinitely often with possibly unbounded (but sublinear) delay. For strongly convex problems, we establish linear convergence of the SUCAG method. When the initialization point is sufficiently close to the optimal solution, the established convergence rate depends only on the condition number of the problem, making it strictly faster than the known rate for the SAGA method. Furthermore, we describe a Markov-driven approach to implementing the SUCAG method in a distributed asynchronous multi-agent setting, via gossiping along a random walk on an undirected communication graph. We show that our analysis applies as long as the graph is connected and, notably, establishes an asymptotic linear convergence rate that is robust to the graph topology. Numerical results demonstrate the merits of our algorithm over existing methods.
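To make the tracking mechanism concrete, the following is a minimal Python sketch, not code from the paper: it illustrates the generic curvature-aided total-gradient tracking idea described in the abstract on synthetic quadratics. Two deliberate simplifications are assumed: components are sampled uniformly at random (the paper drives selection by a random walk on a communication graph), and the paper's unbiasedness weighting is omitted. The function name `curvature_aided_demo` and all parameters are illustrative.

```python
import numpy as np

def curvature_aided_demo(n=10, d=5, iters=300, step=0.2, seed=0):
    """Sketch of curvature-aided total-gradient tracking for
    min_x (1/n) * sum_i f_i(x), on synthetic strongly convex quadratics.

    Simplifications vs. SUCAG: uniform component sampling (the paper uses
    Markov-driven, random-walk selection) and no unbiasedness weighting.
    """
    rng = np.random.default_rng(seed)
    # f_i(x) = 0.5 x'A_i x + b_i'x with A_i symmetric positive definite.
    A = [np.eye(d) + 0.1 * (M @ M.T) for M in rng.standard_normal((n, d, d))]
    b = rng.standard_normal((n, d))
    grad = lambda i, x: A[i] @ x + b[i]   # gradient of f_i at x
    hess = lambda i, x: A[i]              # Hessian of f_i (constant here)

    x = np.zeros(d)
    theta = [x.copy() for _ in range(n)]  # anchor point for each component
    # Running sums so each iteration touches only one component:
    #   G = sum_i grad f_i(theta_i), H = sum_i hess f_i(theta_i),
    #   c = sum_i hess f_i(theta_i) @ theta_i
    G = sum(grad(i, theta[i]) for i in range(n))
    H = sum(hess(i, theta[i]) for i in range(n))
    c = sum(hess(i, theta[i]) @ theta[i] for i in range(n))

    for _ in range(iters):
        i = int(rng.integers(n))          # uniform pick (random walk in the paper)
        # Swap out component i's stale contribution and re-anchor it at x.
        G -= grad(i, theta[i])
        H -= hess(i, theta[i])
        c -= hess(i, theta[i]) @ theta[i]
        theta[i] = x.copy()
        G += grad(i, x)
        H += hess(i, x)
        c += hess(i, x) @ x
        # Curvature-aided estimate of the full gradient at the current x:
        #   g = (1/n) sum_i [grad f_i(theta_i) + hess f_i(theta_i)(x - theta_i)]
        g = (G + H @ x - c) / n
        x = x - step * g
    return x

if __name__ == "__main__":
    print("approximate minimizer:", curvature_aided_demo())
```

Roughly speaking, the per-component anchors make the tracked gradient accurate up to second-order error in the distance between the current iterate and each anchor, which is the mechanism behind the curvature-aided acceleration the abstract describes.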
Pages: 1751-1756
Page count: 6
Related Papers
50 items in total
  • [41] Gradient-free method for nonsmooth distributed optimization
    Li, Jueyou
    Wu, Changzhi
    Wu, Zhiyou
    Long, Qiang
    Journal of Global Optimization, 2015, 61: 325-340
  • [42] Spectral-like gradient method for distributed optimization
    Jakovetic, Dusan
    Krejic, Natasa
    Jerinkic, Natasa Krklec
    Proceedings of the 18th International Conference on Smart Technologies (IEEE EUROCON 2019), 2019
  • [44] Direct method for optimization of stochastic distributed parameter systems
    Aidarous, S. E.
    International Journal of Control, 1975, 21 (06): 929-943
  • [45] A distributed stochastic optimization algorithm with gradient-tracking and distributed heavy-ball acceleration
    Sun, Bihao
    Hu, Jinhui
    Xia, Dawen
    Li, Huaqing
    Frontiers of Information Technology & Electronic Engineering, 2021, 22 (11): 1463-1476
  • [46] Inexact proximal stochastic gradient method for convex composite optimization
    Wang, Xiao
    Wang, Shuxiong
    Zhang, Hongchao
    Computational Optimization and Applications, 2017, 68 (03): 579-618
  • [47] Method of gradient projection in stochastic-system optimization problems
    Bodner, V. A.
    Rodnishchev, N. E.
    Yurikov, E. P.
    Automation and Remote Control, 1978, 39 (09): 1298-1303
  • [48] A heuristic adaptive fast gradient method in stochastic optimization problems
    Ogal'tsov, A. V.
    Tyurin, A. I.
    Computational Mathematics and Mathematical Physics, 2020, 60 (07): 1108-1115
  • [49] A linearly convergent stochastic recursive gradient method for convex optimization
    Liu, Yan
    Wang, Xiao
    Guo, Tiande
    Optimization Letters, 2020, 14: 2265-2283
  • [50] Distributed heterogeneous multi-agent optimization with stochastic sub-gradient
    Hu, H.
    Mo, L.
    Cao, X.
    Journal of Systems Science and Complexity, 2024, 37 (04): 1470-1487