Newton-Type Alternating Minimization Algorithm for Convex Optimization

Cited by: 5
Authors
Stella, Lorenzo [1,2,3]
Themelis, Andreas [1,2,3]
Patrinos, Panagiotis [1,2]
Affiliations
[1] Katholieke Univ Leuven, Dept Elect Engn, Stadius Ctr Dynam Syst Signal Proc & Data Analyt, B-3001 Leuven, Belgium
[2] Katholieke Univ Leuven, Optimizat Engn Ctr, B-3001 Leuven, Belgium
[3] IMT Sch Adv Studies Lucca, I-55100 Lucca, Italy
Keywords
Convergence of numerical methods; iterative methods; optimal control; optimization; error bounds; regularity
DOI
10.1109/TAC.2018.2872203
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
We propose a Newton-type alternating minimization algorithm (NAMA) for solving structured nonsmooth convex optimization problems where the sum of two functions is to be minimized, one being strongly convex and the other composed with a linear mapping. The proposed algorithm is a line-search method over a continuous, real-valued, exact penalty function for the corresponding dual problem, which is computed by evaluating the augmented Lagrangian at the primal points obtained by alternating minimizations. As a consequence, NAMA relies on exactly the same computations as the classical alternating minimization algorithm (AMA), also known as the dual-proximal gradient method. Under standard assumptions, the proposed algorithm converges with global sublinear and local linear rates, while under mild additional assumptions, the asymptotic convergence is superlinear, provided that the search directions are chosen according to quasi-Newton formulas. Due to its simplicity, the proposed method is well suited for embedded applications and large-scale problems. Experiments show that using limited-memory directions in NAMA greatly improves the convergence speed over AMA and its accelerated variant.
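
To make the relationship between AMA and NAMA concrete, the following Python sketch implements the classical AMA iteration (the dual proximal-gradient method the paper builds on) for an illustrative instance. The problem data, step size, and helper names are assumptions for this example only, not the paper's notation; the comment in the dual-update step marks where NAMA would instead perform a line search along a quasi-Newton direction over the dual exact penalty, reusing exactly the same x- and z-computations.

import numpy as np

# Illustrative instance (assumed, not from the paper):
# minimize f(x) + g(z) subject to z = A x, with
#   f(x) = 0.5 * ||x - c||^2   (strongly convex)
#   g(z) = lam * ||z||_1       (nonsmooth, prox-friendly)

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ama(A, c, lam, rho, iters=500):
    m, _ = A.shape
    y = np.zeros(m)                       # dual variable for z = A x
    for _ in range(iters):
        # x-step: minimize f(x) + <y, A x>; closed form since f is quadratic.
        x = c - A.T @ y
        # z-step: minimize the augmented Lagrangian in z, i.e. a prox of g.
        z = soft_threshold(A @ x + y / rho, lam / rho)
        # Dual update: a gradient step on the dual problem.  NAMA would
        # replace this fixed step with a line search along a (quasi-)Newton
        # direction over an exact penalty for the dual, evaluated from the
        # same x and z computed above.
        y = y + rho * (A @ x - z)
    return x, z, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    c = rng.standard_normal(50)
    # Step size below the dual-gradient Lipschitz threshold sigma_f/||A||^2.
    x, z, y = ama(A, c, lam=0.1, rho=0.5 / np.linalg.norm(A, 2) ** 2)
    print("residual ||Ax - z|| =", np.linalg.norm(A @ x - z))
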
Pages: 697-711
Page count: 15