Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum

Cited by: 0
Authors
Hu, Mengqi [1]
Lou, Yifei [2]
Wang, Bao [3]
Yan, Ming [4,5,6]
Yang, Xiu [1]
Ye, Qiang [7]
Affiliations
[1] Lehigh Univ, Dept Ind & Syst Engn, 200 West Packer Ave, Bethlehem, PA 18015 USA
[2] Univ Texas Dallas, Dept Math Sci, 800 W Campbell Rd, Richardson, TX 75080 USA
[3] Univ Utah, Sci Comp & Imaging Inst, Dept Math, 72 Cent Campus Dr, Salt Lake City, UT 84102 USA
[4] Chinese Univ Hong Kong Shenzhen, Sch Data Sci, 2001 Longxiang Blvd, Shenzhen, Guangdong, Peoples R China
[5] Michigan State Univ, Dept Computat Math Sci & Engn, 428 South Shaw Lane, E Lansing, MI 48824 USA
[6] Michigan State Univ, Dept Math, 428 South Shaw Lane, E Lansing, MI 48824 USA
[7] Univ Kentucky, Dept Math, Lexington, KY 40513 USA
Keywords
Accelerated gradient momentum; Operator splitting; Fixed step size; Convergence rate; GLOBAL CONVERGENCE; THRESHOLDING ALGORITHM; MINIMIZATION; L1; REPRESENTATION; DIFFERENCE; SELECTION; RESTART
DOI
10.1007/s10915-023-02148-y
CLC Classification
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
This paper applies an adaptive-momentum idea from the nonlinear conjugate gradient method to accelerate optimization problems arising in sparse recovery. Specifically, we consider two types of minimization problems: minimizing a (single) differentiable function, and minimizing the sum of a non-smooth function and a differentiable function. In the first case, we adopt a fixed step size to avoid the traditional line search and establish a convergence analysis of the proposed algorithm for a quadratic problem. In the second case, this acceleration is incorporated into an operator-splitting technique to handle the non-smooth function. We use the convex ℓ1 and the nonconvex ℓ1 - ℓ2 functionals as two case studies to demonstrate the efficiency of the proposed approaches over traditional methods.
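To make the mechanism concrete, below is a minimal sketch (not the authors' exact algorithm) of proximal gradient descent for the ℓ1-regularized least-squares problem 0.5*||Ax - b||^2 + lam*||x||_1, in which the momentum weight on the search direction is a Fletcher-Reeves-type nonlinear conjugate gradient ratio and the step size is fixed at 1/L with no line search. The restart rule, function names, parameters, and synthetic test problem are illustrative assumptions.

import numpy as np


def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)


def prox_grad_ncg_momentum(A, b, lam, n_iter=500):
    # Proximal gradient descent for 0.5*||Ax - b||^2 + lam*||x||_1 with a
    # Fletcher-Reeves-style adaptive momentum weight on the search direction.
    # Illustrative only: this momentum/restart rule is an assumption, not the
    # exact scheme analyzed in the paper.
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the smooth part
    step = 1.0 / L                      # fixed step size, no line search
    x = np.zeros(A.shape[1])
    grad = A.T @ (A @ x - b)
    direction = -grad
    for _ in range(n_iter):
        x = soft_threshold(x + step * direction, step * lam)
        grad_new = A.T @ (A @ x - b)
        beta = (grad_new @ grad_new) / max(grad @ grad, 1e-16)  # FR-type ratio
        direction = -grad_new + beta * direction
        if grad_new @ direction >= 0:   # restart if not a descent direction
            direction = -grad_new
        grad = grad_new
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, k = 80, 200, 10               # small compressed-sensing test problem
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true
    x_hat = prox_grad_ncg_momentum(A, b, lam=1e-3)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Replacing soft_threshold with a proximal operator for the ℓ1 - ℓ2 penalty would give a nonconvex variant in the spirit of the second case study; the fixed step 1/L mirrors the no-line-search choice described in the abstract.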
Pages: 21
Related Articles
50 records in total
• [1] Hu, Mengqi; Lou, Yifei; Wang, Bao; Yan, Ming; Yang, Xiu; Ye, Qiang. Accelerated Sparse Recovery via Gradient Descent with Nonlinear Conjugate Gradient Momentum. Journal of Scientific Computing, 2023, 95.
• [2] Aminifard, Z.; Babaie-Kafaki, S.; Mirhoseini, N. An Accelerated Three-Term Extension of a Descent Nonlinear Conjugate Gradient Method. Asia-Pacific Journal of Operational Research, 2023, 40 (03).
• [3] Lu, HB; Jesmanowicz, A; Li, SJ; Hyde, JS. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization. Magnetic Resonance in Medicine, 2004, 51 (01): 158-164.
• [4] Qu, Qing; Li, Xiao; Zhu, Zhihui. Exact Recovery of Multichannel Sparse Blind Deconvolution via Gradient Descent. SIAM Journal on Imaging Sciences, 2020, 13 (03): 1630-1652.
• [5] Ying, Tao. A Class of Descent Nonlinear Conjugate Gradient Methods. 2013 Fourth International Conference on Digital Manufacturing and Automation (ICDMA), 2013: 14-16.
• [6] Wang, Bao; Nguyen, Tan; Sun, Tao; Bertozzi, Andrea L.; Baraniuk, Richard G.; Osher, Stanley J. Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent. SIAM Journal on Imaging Sciences, 2022, 15 (02): 738-761.
• [7] Li, Xiaoyong; Liu, Hailin. Global convergence of a descent nonlinear conjugate gradient method. ICMS2010: Proceedings of the Third International Conference on Modelling and Simulation, Vol 1: Engineering Computation and Finite Element Analysis, 2010: 79-84.
• [8] Liu, Wei; Chen, Li; Chen, Yunfei; Zhang, Wenyi. Accelerating Federated Learning via Momentum Gradient Descent. IEEE Transactions on Parallel and Distributed Systems, 2020, 31 (08): 1754-1766.
• [9] Bhaya, A; Kaszkurewicz, E. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Neural Networks, 2004, 17 (01): 65-71.
• [10] Cheng, Wanyou; Liu, Qunfeng. Sufficient descent nonlinear conjugate gradient methods with conjugacy condition. Numerical Algorithms, 2010, 53 (01): 113-131.