Inexact proximal stochastic gradient method for convex composite optimization

Cited by: 8
Authors
Wang, Xiao [1 ]
Wang, Shuxiong [2 ]
Zhang, Hongchao [3 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Math Sci, 19A Yuquan Rd, Beijing 100049, Peoples R China
[2] Chinese Acad Sci, Inst Computat Math & Sci Engn Comp, Acad Math & Syst Sci, Beijing 100190, Peoples R China
[3] Louisiana State Univ, Dept Math, 220 Lockett Hall, Baton Rouge, LA 70803 USA
Funding
National Natural Science Foundation of China; National Science Foundation (USA)
Keywords
Convex composite optimization; Empirical risk minimization; Stochastic gradient; Inexact methods; Global convergence; Complexity bound; Approximation algorithms; Thresholding algorithm
DOI
10.1007/s10589-017-9932-7
Chinese Library Classification
C93 [Management]; O22 [Operations Research]
Subject classification codes
070105; 12; 1201; 1202; 120202
Abstract
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated in the method to reduce the variance of the stochastic gradient. The main feature of this IPSG algorithm is that it allows the proximal subproblems to be solved inexactly while still retaining global convergence with desirable complexity bounds. Different subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived for both cases, when the objective function is strongly convex and when it is merely convex. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
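The proximal stochastic gradient step with variance reduction described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' IPSG algorithm: it uses an SVRG-style variance-reduced gradient on a hypothetical Lasso instance, where the regularizer is the l1-norm and its proximal operator (soft-thresholding) is computed exactly, standing in for the inexact proximal subproblem solves studied in the paper. All function and variable names are this sketch's own choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1; here computed exactly,
    # whereas the paper allows an inexact subproblem solution.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_svrg(A, b, lam, eta, n_outer=30, m=None, seed=0):
    """Sketch: minimize (1/2n)||Ax - b||^2 + lam * ||x||_1,
    i.e. f_i(x) = 0.5 * (a_i^T x - b_i)^2 (smooth, convex) averaged
    over i, plus the convex nonsmooth term h(x) = lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    m = m or 2 * n                      # inner-loop length per snapshot
    x_tilde = np.zeros(d)               # snapshot point
    for _ in range(n_outer):
        # Full gradient of the smooth part at the snapshot
        full_grad = A.T @ (A @ x_tilde - b) / n
        x = x_tilde.copy()
        for _ in range(m):
            i = rng.integers(n)
            a_i = A[i]
            # Variance-reduced stochastic gradient:
            # grad f_i(x) - grad f_i(x_tilde) + full_grad
            g = (a_i * (a_i @ x - b[i])
                 - a_i * (a_i @ x_tilde - b[i])
                 + full_grad)
            # Proximal (forward-backward) step
            x = soft_threshold(x - eta * g, eta * lam)
        x_tilde = x
    return x_tilde
```

The variance-reduced gradient `g` is an unbiased estimate of the full gradient whose variance shrinks as the iterates approach the snapshot, which is what allows a constant step size `eta` rather than a diminishing one.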
Pages: 579-618 (40 pages)