An enhanced gradient-tracking bound for distributed online stochastic convex optimization

Cited by: 0
Authors
Alghunaim, Sulaiman A. [1 ]
Yuan, Kun [2 ]
Affiliations
[1] Kuwait Univ, Dept Elect Engn, Kuwait, Kuwait
[2] Peking Univ, Ctr Machine Learning Res, Beijing, Peoples R China
Keywords
Distributed stochastic optimization; Decentralized learning; Gradient-tracking; Adapt-then-combine; Linear convergence; Algorithms
DOI
10.1016/j.sigpro.2023.109345
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Gradient-tracking (GT) based decentralized methods have emerged as an effective and viable alternative to decentralized (stochastic) gradient descent (DSGD) for solving distributed online stochastic optimization problems. Initial analyses of GT methods implied that they have a worse network-dependent rate than DSGD, contradicting experimental results. This dilemma has recently been resolved: tighter rates have been established for GT methods that improve upon DSGD. In this work, we establish further enhanced rates for GT methods in the online stochastic convex setting. We present an alternative approach for analyzing GT methods for convex problems over static graphs. Compared to previous analyses, this approach allows us to establish improved network-dependent rates.
Pages: 9
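
For context, the adapt-then-combine (ATC) gradient-tracking recursion that the abstract refers to can be sketched as below. This is a minimal NumPy illustration assuming deterministic local gradient oracles grads[i], a doubly stochastic mixing matrix W over a static graph, and a constant step size alpha; it is a generic sketch of the GT template, not the paper's exact algorithm or analysis.

import numpy as np

def atc_gradient_tracking(grads, W, x0, alpha=0.05, iters=500):
    """Sketch of adapt-then-combine (ATC) gradient tracking over a
    static graph. grads[i] maps a point to agent i's local gradient;
    W is a doubly stochastic mixing matrix. Illustrative only."""
    n = len(grads)
    x = np.tile(x0, (n, 1))                 # one local iterate per agent
    grad_prev = np.stack([grads[i](x[i]) for i in range(n)])
    g = grad_prev.copy()                    # trackers start at the local gradients
    for _ in range(iters):
        x = W @ (x - alpha * g)             # adapt (local descent step), then combine (mix)
        grad_new = np.stack([grads[i](x[i]) for i in range(n)])
        g = W @ (g + grad_new - grad_prev)  # g tracks the network-average gradient
        grad_prev = grad_new
    return x.mean(axis=0)

# Toy usage: three agents with quadratic losses f_i(x) = 0.5*||x - b_i||^2
# on a complete graph; the minimizer of the sum is the mean of the b_i.
if __name__ == "__main__":
    b = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([2.0, 1.0])]
    grads = [lambda x, bi=bi: x - bi for bi in b]
    W = np.array([[0.50, 0.25, 0.25],
                  [0.25, 0.50, 0.25],
                  [0.25, 0.25, 0.50]])      # doubly stochastic mixing matrix
    x_star = atc_gradient_tracking(grads, W, np.zeros(2))
    print(x_star, "vs. average of b:", np.mean(b, axis=0))

Under these assumptions the tracker g converges to the average gradient across agents, which is why GT methods avoid the bias that DSGD incurs from heterogeneous local objectives.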