REDUCING THE COMPLEXITY OF TWO CLASSES OF OPTIMIZATION PROBLEMS BY INEXACT ACCELERATED PROXIMAL GRADIENT METHOD

Cited: 1
Authors
Lin, Qihang [1 ]
Xu, Yangyang [2 ]
Affiliations
[1] Univ Iowa, Dept Business Analyt, Iowa City, IA 52242 USA
[2] Rensselaer Polytech Inst, Dept Math Sci, Troy, NY 12180 USA
Keywords
first-order method; constrained optimization; saddle-point nonsmooth optimization; first-order methods; convex; algorithm; regression; shrinkage; selection
DOI
10.1137/22M1469584
Chinese Library Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
We propose a double-loop inexact accelerated proximal gradient (APG) method for a strongly convex composite optimization problem with two smooth components of different smoothness constants and computational costs. Compared to APG, the inexact APG can reduce the time complexity for finding a near-stationary point when one smooth component has a higher computational cost but a smaller smoothness constant than the other. A strongly convex composite optimization problem with this property arises from the subproblems of a regularized augmented Lagrangian method for affine-constrained composite convex optimization, and also from the smooth approximation of bilinear saddle-point structured nonsmooth convex optimization. We show that the inexact APG method can be applied to these two problems and reduces the time complexity for finding a near-stationary solution. Numerical experiments demonstrate the significantly higher efficiency of our methods over an optimal primal-dual first-order method by Hamedani and Aybat [SIAM J. Optim., 31 (2021), pp. 1299--1329] and the gradient sliding method by Lan, Ouyang, and Zhou [arXiv:2101.00143, 2021].
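The double-loop structure described in the abstract can be sketched in code. The sketch below is an illustrative reconstruction under simple assumptions, not the paper's exact scheme: a FISTA-style outer loop takes accelerated steps on the costly smooth component f1, and each outer step's proximal subproblem (which couples the cheap smooth component f2 with a nonsmooth regularizer, here taken to be an l1 term) is solved only approximately by a fixed number of inner proximal gradient iterations. The function name `inexact_apg` and the specific stopping/step-size choices are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_apg(grad_f1, L1, grad_f2, L2, lam, x0,
                outer_iters=1000, inner_iters=50):
    """Illustrative double-loop inexact APG (not the paper's exact scheme).

    Minimizes f1(x) + f2(x) + lam*||x||_1, where f1 is the expensive smooth
    component (smoothness constant L1) and f2 is the cheap one (constant L2).
    Each outer accelerated step requires the strongly convex subproblem
        min_u  f2(u) + lam*||u||_1 + (L1/2)*||u - z||^2,
    which is solved only approximately by `inner_iters` proximal gradient
    (ISTA) steps on f2.
    """
    x = x0.copy()
    x_prev = x0.copy()
    t = 1.0
    for _ in range(outer_iters):
        # FISTA-style momentum on the outer sequence.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        # Forward (gradient) step on the costly component f1 only.
        z = y - grad_f1(y) / L1
        # Inner loop: inexactly solve the proximal subproblem at z.
        u = z.copy()
        for _ in range(inner_iters):
            g = grad_f2(u) + L1 * (u - z)          # gradient of smooth part
            u = soft_threshold(u - g / (L2 + L1),  # ISTA step, step 1/(L2+L1)
                               lam / (L2 + L1))
        x_prev, x, t = x, u, t_next
    return x
```

As a sanity check on a toy instance with f1(x) = 0.5*||x - c||^2 (L1 = 1) and f2(x) = 2.5*||x||^2 (L2 = 5), the componentwise minimizer of f1 + f2 + ||x||_1 is soft_threshold(c, 1)/6, which the sketch recovers. The abstract's complexity gain comes precisely from the asymmetry the sketch exploits: the expensive gradient `grad_f1` is evaluated once per outer iteration, while only the cheap `grad_f2` is called in the inner loop.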
Pages: 1-35 (35 pages)