A DOUBLE INCREMENTAL AGGREGATED GRADIENT METHOD WITH LINEAR CONVERGENCE RATE FOR LARGE-SCALE OPTIMIZATION

Cited by: 0
Authors:
Mokhtari, Aryan [1 ]
Gurbuzbalaban, Mert [2 ]
Ribeiro, Alejandro [1 ]
Affiliations:
[1] Univ Penn, Dept Elect & Syst Engn, Philadelphia, PA 19104 USA
[2] Rutgers State Univ, Dept Management Sci & Informat Syst, New Brunswick, NJ USA
Keywords:
Incremental methods; gradient descent; linear convergence rate
DOI:
Not available
Chinese Library Classification (CLC):
O42 [Acoustics]
Subject classification codes:
070206; 082403
Abstract
This paper considers the problem of minimizing the average of a finite set of strongly convex functions. We introduce a double incremental aggregated gradient method (DIAG) that computes the gradient of only one function at each iteration, chosen according to a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. We prove that the proposed DIAG method not only converges linearly to the optimal solution, but also has a linear convergence factor that justifies the advantage of incremental methods over full-batch gradient descent. In particular, we show theoretically and empirically that one pass of DIAG is more efficient than one iteration of gradient descent.
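The update described in the abstract can be made concrete with a short gradient-table loop. The Python sketch below assumes a basic cyclic incremental aggregated gradient scheme consistent with that description: one component gradient is recomputed per iteration and the average of all stored gradients stands in for the full gradient. The step size, quadratic test problem, and function names are illustrative assumptions, not material from the paper; the exact DIAG rule additionally aggregates the stored iterates (the "double" in its name), which this sketch omits.

    import numpy as np

    def incremental_aggregated_gradient(grad_funcs, x0, step_size, num_passes):
        # Cyclic incremental aggregated gradient sketch (not the exact DIAG update).
        n = len(grad_funcs)                    # number of component functions f_i
        x = x0.copy()
        grads = [g(x) for g in grad_funcs]     # table of stored gradients, one per f_i
        avg_grad = sum(grads) / n              # average of the stored gradients
        for k in range(num_passes * n):
            i = k % n                          # cyclic choice of the component to refresh
            new_grad = grad_funcs[i](x)        # single gradient evaluation per iteration
            avg_grad = avg_grad + (new_grad - grads[i]) / n  # O(d) update of the average
            grads[i] = new_grad
            x = x - step_size * avg_grad       # step along the aggregated direction
        return x

    # Illustrative problem: average of strongly convex quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2.
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((5, 3)) for _ in range(10)]
    b = [rng.standard_normal(5) for _ in range(10)]
    grad_funcs = [lambda x, Ai=Ai, bi=bi: Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b)]
    x_hat = incremental_aggregated_gradient(grad_funcs, np.zeros(3), step_size=0.01, num_passes=100)

Each iteration touches only one component function, so one pass over all n functions costs roughly the same number of gradient evaluations as a single full-gradient step, which is the comparison the abstract's last sentence refers to.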
Pages: 4696-4700
Page count: 5
Related papers (50 in total)
  • [31] A gradient-based continuous method for large-scale optimization problems
    Liao, L. Z.; Qi, L. Q.; Tam, H. W.
    Journal of Global Optimization, 2005, 31(2): 271-286
  • [32] Variable smoothing incremental aggregated gradient method for nonsmooth nonconvex regularized optimization
    Liu, Yuncheng; Xia, Fuquan
    Optimization Letters, 2021, 15(6): 2147-2164
  • [33] Incremental cooperative coevolution for large-scale global optimization
    Mahdavi, Sedigheh; Rahnamayan, Shahryar; Shiri, Mohammad Ebrahim
    Soft Computing, 2018, 22(6): 2045-2064
  • [36] Convergence rate of incremental gradient and incremental Newton methods
    Gurbuzbalaban, M.; Ozdaglar, A.; Parrilo, P. A.
    SIAM Journal on Optimization, 2019, 29(4): 2542-2565
  • [37] On the linear convergence rate of Riemannian proximal gradient method
    Choi, Woocheol; Chun, Changbum; Jung, Yoon Mo; Yun, Sangwoon
    Optimization Letters, 2024
  • [38] A delayed proximal gradient method with linear convergence rate
    Feyzmahdavian, Hamid Reza; Aytekin, Arda; Johansson, Mikael
    2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2014
  • [40] Global convergence of QPFTH method for large-scale nonlinear sparse constrained optimization
    Ni, Qin
    Acta Mathematicae Applicatae Sinica, 1998, 14(3): 271-283