CONVERGENCE RATE OF INCREMENTAL GRADIENT AND INCREMENTAL NEWTON METHODS

Cited: 15
Authors
Gurbuzbalaban, M. [1 ]
Ozdaglar, A. [1 ]
Parrilo, P. A. [1 ]
Affiliations
[1] MIT, Informat & Decis Syst Lab, Cambridge, MA 02139 USA
Funding
U.S. National Science Foundation;
Keywords
convex optimization; incremental algorithms; first-order methods; convergence rate; STOCHASTIC-APPROXIMATION; SUBGRADIENT METHODS; ALGORITHMS;
DOI
10.1137/17M1147846
Chinese Library Classification
O29 [Applied Mathematics];
Discipline Code
070104;
Abstract
The incremental gradient (IG) method is a prominent algorithm for minimizing a finite sum of smooth convex functions and is used in many contexts, including large-scale data processing applications and distributed optimization over networks. It is a first-order method that processes the functions one at a time based on their gradient information. The incremental Newton method, on the other hand, is a second-order variant which additionally exploits the curvature information of the underlying functions and can therefore be faster. In this paper, we focus on the case when the objective function is strongly convex and present new convergence rate estimates for the incremental gradient and incremental Newton methods under constant and diminishing step sizes. For a decaying step-size rule alpha_k = R/k^s with s ∈ (0, 1] and R > 0, we show that the distance of the IG iterates to the optimal solution converges at a rate O(1/k^s) (which translates into an O(1/k^(2s)) rate in the suboptimality of the objective value). For s > 1/2, under the additional assumption that the functions are smooth, this improves the previous O(1/√k) distance results obtained for the case when the functions are nonsmooth. We show that to achieve the fastest O(1/k) rate with a step size alpha_k = R/k, IG needs the step-size parameter R to be a function of the strong convexity constant, whereas the incremental Newton method does not. The results are based on viewing the IG method as a gradient descent method with gradient errors, developing upper bounds on the gradient error to derive inequalities that relate the distances of consecutive iterates to the optimal solution, and finally applying Chung's lemmas from the stochastic approximation literature to these inequalities to determine their asymptotic behavior. In addition, we construct examples to show the tightness of our rate results in terms of their dependence on k.
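To make the step-size rule in the abstract concrete, the following Python sketch runs a plain cyclic incremental gradient loop with the decaying step size alpha_k = R/k^s. It is a minimal illustration based only on the abstract's description, not the paper's actual algorithmic variants or experiments; the names (incremental_gradient, grads, n_passes) are illustrative and not from the paper.

import numpy as np

def incremental_gradient(grads, x0, n_passes=100, R=1.0, s=1.0):
    """Cyclic incremental gradient (IG) sketch with the decaying
    step-size rule alpha_k = R / k**s described in the abstract.

    grads : list of callables; grads[i](x) returns the gradient of the
            i-th component function f_i at x.
    All argument names are illustrative, not taken from the paper.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_passes + 1):    # k counts passes (cycles) over the sum
        alpha = R / k**s                # diminishing step size alpha_k = R / k^s
        for g in grads:                 # process the component functions one at a time
            x = x - alpha * g(x)
    return x

# Toy strongly convex example: f_i(x) = 0.5 * (x - a_i)^2, so the sum is
# minimized at the mean of the a_i.
a = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
x_hat = incremental_gradient(grads, x0=0.0, n_passes=500, R=1.0, s=1.0)
print(x_hat)  # approaches the minimizer 2.5

With s = 1 this uses the fastest alpha_k = R/k schedule; as the abstract notes, for IG the choice of R relative to the strong convexity constant matters for attaining the O(1/k) rate, which is one motivation for the incremental Newton variant.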
Pages: 2542-2565
Number of pages: 24
Related Papers
50 records in total
  • [1] GLOBAL CONVERGENCE RATE OF PROXIMAL INCREMENTAL AGGREGATED GRADIENT METHODS
    Vanli, N. D.
    Gurbuzbalaban, M.
    Ozdaglar, A.
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (02) : 1282 - 1300
  • [2] Global Convergence Rate of Incremental Aggregated Gradient Methods for Nonsmooth Problems
    Vanli, N. Denizcan
    Gurbuzbalaban, Mert
    Ozdaglar, Asuman
    [J]. 2016 IEEE 55TH CONFERENCE ON DECISION AND CONTROL (CDC), 2016, : 173 - 178
  • [3] ON THE CONVERGENCE RATE OF INCREMENTAL AGGREGATED GRADIENT ALGORITHMS
    Gurbuzbalaban, M.
    Ozdaglar, A.
    Parrilo, P. A.
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2017, 27 (02) : 1035 - 1048
  • [4] AN INCREMENTAL QUASI-NEWTON METHOD WITH A LOCAL SUPERLINEAR CONVERGENCE RATE
    Mokhtari, Aryan
    Eisen, Mark
    Ribeiro, Alejandro
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 4039 - 4043
  • [5] Incremental Quasi-Newton Methods with Faster Superlinear Convergence Rates
    Liu, Zhuanghua
    Luo, Luo
    Low, Bryan Kian Hsiang
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 13, 2024, : 14097 - 14105
  • [6] IQN: AN INCREMENTAL QUASI-NEWTON METHOD WITH LOCAL SUPERLINEAR CONVERGENCE RATE
    Mokhtari, Aryan
    Eisen, Mark
    Ribeiro, Alejandro
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (02) : 1670 - 1698
  • [7] Convergence rate of incremental subgradient algorithms
    Nedic, A
    Bertsekas, D
    [J]. STOCHASTIC OPTIMIZATION: ALGORITHMS AND APPLICATIONS, 2001, 54 : 223 - 264
  • [8] Understanding Gradient Clipping In Incremental Gradient Methods
    Qian, Jiang
    Wu, Yuren
    Zhuang, Bojin
    Wang, Shaojun
    Xiao, Jing
    [J]. 24TH INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS (AISTATS), 2021, 130
  • [9] SURPASSING GRADIENT DESCENT PROVABLY: A CYCLIC INCREMENTAL METHOD WITH LINEAR CONVERGENCE RATE
    Mokhtari, Aryan
    Gurbuzbalaban, Mert
    Ribeiro, Alejandro
    [J]. SIAM JOURNAL ON OPTIMIZATION, 2018, 28 (02) : 1420 - 1447
  • [10] On the convergence of a Block-Coordinate Incremental Gradient method
    Palagi, Laura
    Seccia, Ruggiero
    [J]. SOFT COMPUTING, 2021, 25 (19) : 12615 - 12626