A Sequential Subspace Quasi-Newton Method for Large-Scale Convex Optimization

Cited: 0
Authors
Senov, Aleksandr [1 ,2 ,3 ]
Granichin, Oleg [1 ,2 ,3 ]
Granichina, Olga [4 ]
Affiliations
[1] St Petersburg State Univ, Fac Math & Mech, Univ Sky Prospekt 28, St Petersburg 198504, Russia
[2] St Petersburg State Univ, Ctr Math Robot & Artificial Intelligence, Univ Sky Prospekt 28, St Petersburg 198504, Russia
[3] Russian Acad Sci, Inst Problems Mech Engn, Moscow 117901, Russia
[4] Herzen State Pedag Univ, St Petersburg, Russia
Funding
Russian Science Foundation
Keywords
CONJUGATE-GRADIENT; CONVERGENCE; ALGORITHM;
DOI
10.23919/acc45564.2020.9147989
Chinese Library Classification: TP [Automation technology; computer technology]
Subject Classification Code: 0812
Abstract
Large-scale optimization plays an important role in many control and learning problems. Sequential subspace optimization is a novel approach particularly suitable for large-scale optimization problems. It is based on sequentially reducing the initial optimization problem to optimization problems in a low-dimensional space. In this paper we consider the problem of optimizing a multidimensional convex real-valued function. Within the framework of sequential subspace optimization, we develop a new method based on a combination of quasi-Newton and conjugate gradient method steps. We provide its formal justification and derive several of its theoretical properties. In particular, for the quadratic programming problem we prove linear convergence in a finite number of steps. We demonstrate the superiority of the proposed algorithm over common state-of-the-art methods through a comparative analysis on both modelled and real-world optimization problems.
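The core idea described in the abstract (reducing the full problem to a sequence of low-dimensional subproblems) can be illustrated on the quadratic case the paper analyzes. The sketch below is an assumption-laden illustration of generic sequential subspace optimization, not the authors' exact algorithm: at each iteration it builds a two-dimensional subspace spanned by the current gradient and the previous step, then solves the reduced quadratic in closed form.

```python
import numpy as np

def sesop_quadratic(A, b, x0, iters=50, tol=1e-12):
    """Illustrative sequential subspace sketch for f(x) = 0.5 x^T A x - b^T x,
    with A symmetric positive definite. This is NOT the paper's method,
    only a generic subspace-reduction scheme in the same spirit.
    """
    x = x0.astype(float)
    prev_step = None
    for _ in range(iters):
        g = A @ x - b                                # full-space gradient
        if np.linalg.norm(g) < tol:
            break
        # Subspace: span{gradient, previous step} (gradient only at start).
        dirs = [g] if prev_step is None else [g, prev_step]
        P, _ = np.linalg.qr(np.column_stack(dirs))   # orthonormal basis, n x k
        # Reduced problem: min_a 0.5 a^T (P^T A P) a + (P^T g)^T a,
        # solved exactly since the reduced Hessian P^T A P is SPD.
        a = np.linalg.solve(P.T @ A @ P, -(P.T @ g))
        step = P @ a
        x = x + step
        prev_step = step
    return x
```

For a quadratic, the span{gradient, previous step} choice makes this scheme behave like conjugate gradients, which is consistent with the abstract's claim of convergence in a finite number of steps for quadratic programming.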
Pages: 3627 - 3632
Number of pages: 6