On the linear convergence rate of Riemannian proximal gradient method

Cited: 0
Authors
Choi, Woocheol [1 ]
Chun, Changbum [1 ]
Jung, Yoon Mo [1 ]
Yun, Sangwoon [2 ]
Affiliations
[1] Sungkyunkwan Univ, Dept Math, Suwon 16419, South Korea
[2] Sungkyunkwan Univ, Dept Math Educ, Seoul 03063, South Korea
Funding
National Research Foundation, Singapore;
Keywords
Riemannian proximal gradient method; linear convergence rate; strongly convex;
DOI
10.1007/s11590-024-02129-6
CLC classification
C93 [Management Science]; O22 [Operations Research];
Discipline codes
070105; 12; 1201; 1202; 120202;
Abstract
Composite optimization problems on Riemannian manifolds arise in applications such as sparse principal component analysis and dictionary learning. Recently, Huang and Wei introduced a Riemannian proximal gradient method (Huang and Wei in Math Program 194:371-413, 2022) and an inexact Riemannian proximal gradient method (Huang and Wei in Comput Optim Appl 85:1-32, 2023), utilizing the retraction mapping to address these problems. They established the sublinear convergence rate of the Riemannian proximal gradient method under retraction convexity and a geometric condition on retractions, as well as the local linear convergence rate of the inexact Riemannian proximal gradient method under the Riemannian Kurdyka-Lojasiewicz property. In this paper, we demonstrate the linear convergence rate of the Riemannian proximal gradient method, and of the proximal gradient method proposed in Chen et al. (SIAM J Optim 30:210-239, 2020), under strong retraction convexity. Additionally, we provide a counterexample that violates the geometric condition on retractions, which is crucial for establishing the sublinear convergence rate of the Riemannian proximal gradient method.
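The composite setting in the abstract can be illustrated in the flat (Euclidean) special case, where the manifold is R^n and the retraction is the identity. The sketch below is not the paper's method: it assumes a smooth term f(x) = 0.5*||Ax - b||^2 (strongly convex when A has full column rank) and a nonsmooth term g(x) = lam*||x||_1, whose proximal map is soft-thresholding; all names are illustrative.

```python
import numpy as np

# Hedged, flat-space sketch (not the paper's Riemannian method): the classical
# proximal gradient iteration for F(x) = f(x) + g(x). The Riemannian variant
# studied in the paper replaces the straight-line step with a retraction on a
# manifold; strong (retraction) convexity is what yields the linear rate.

def soft_threshold(z, t):
    """Prox of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_gradient(A, b, lam, steps):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)             # gradient step on the smooth f
        x = soft_threshold(x - grad / L, lam / L)  # prox step on g
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))            # tall matrix -> f strongly convex
b = rng.standard_normal(40)
x = proximal_gradient(A, b, lam=0.1, steps=2000)
```

With f strongly convex, the iterates contract geometrically toward the unique minimizer, which is the flat-space analogue of the linear rate the paper establishes under strong retraction convexity.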
Pages: 21
Related papers (50 records total)
  • [1] A delayed proximal gradient method with linear convergence rate
    Feyzmahdavian, Hamid Reza; Aytekin, Arda; Johansson, Mikael
    2014 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2014
  • [2] An inexact Riemannian proximal gradient method
    Huang, Wen; Wei, Ke
    COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, 2023, 85 (01) : 1 - 32
  • [3] On the linear convergence rate of the distributed block proximal method
    Farina, Francesco; Notarstefano, Giuseppe
    IEEE CONTROL SYSTEMS LETTERS, 2020, 4 (03) : 779 - 784
  • [4] Nonconvex proximal incremental aggregated gradient method with linear convergence
    Peng, Wei; Zhang, Hui; Zhang, Xiaoya
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2019, 183 (01) : 230 - 245
  • [5] Inertial proximal incremental aggregated gradient method with linear convergence guarantees
    Zhang, Xiaoya; Peng, Wei; Zhang, Hui
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2022, 96 (02) : 187 - 213
  • [6] A conditional gradient method with linear rate of convergence for solving convex linear systems
    Beck, Amir; Teboulle, Marc
    MATHEMATICAL METHODS OF OPERATIONS RESEARCH, 2004, 59 (02) : 235 - 247