On the Convergence of an Alternating Direction Penalty Method for Nonconvex Problems

Cited by: 0
Authors
Magnusson, S. [1 ]
Weeraddana, P. C. [1 ]
Rabbat, M. G. [2 ]
Fischione, C. [1 ]
Affiliations
[1] KTH Royal Inst Technol, Dept Automat Control, Stockholm, Sweden
[2] McGill Univ, Dept Elect & Comp Engn, Montreal, PQ, Canada
Keywords
Nonconvex optimization; Distributed optimization; Algorithm; Consensus
DOI
None available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
This paper investigates convergence properties of scalable algorithms for nonconvex and structured optimization. We consider the Alternating Direction Penalty Method (ADPM), an adaptation of the classic quadratic penalty function method. Unlike the original quadratic penalty method, which minimizes the penalty function over all variables in a single step, ADPM uses alternating optimization, which in turn is exploited to make the algorithm scalable. A special case of ADPM is a variant of the well-known Alternating Direction Method of Multipliers (ADMM) in which the penalty parameter is increased to infinity. We show that, due to the increasing penalty, ADPM asymptotically reaches a primal feasible point under mild conditions. Moreover, we give numerical evidence demonstrating the potential of ADPM for computing locally optimal points when the penalty is not updated too aggressively.
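
As a rough illustration of the mechanism the abstract describes, the following minimal Python sketch applies an ADPM-style iteration to a toy problem: minimize f(x) + g(z) subject to x = z, with the constraint handled by a quadratic penalty that is alternately minimized in x and z while the penalty parameter rho grows. The objective functions, the penalty schedule (rho *= 1.05), and the grid-search z-step are illustrative assumptions, not taken from the paper.

    import numpy as np

    # Toy problem (assumed, not from the paper):
    #   minimize  f(x) + g(z)  subject to  x = z,
    # with f(x) = (x - 1)^2 and a nonconvex g(z) = z^4 - z^2.
    # Penalty function:  f(x) + g(z) + (rho / 2) * (x - z)^2.

    def argmin_x(z, rho):
        # Exact minimizer of (x - 1)^2 + (rho/2)(x - z)^2 over x
        # (set the derivative 2(x - 1) + rho(x - z) to zero).
        return (2.0 + rho * z) / (2.0 + rho)

    def argmin_z(x, rho):
        # Nonconvex z-step, solved by brute-force grid search
        # purely for illustration.
        grid = np.linspace(-3.0, 3.0, 6001)
        vals = grid**4 - grid**2 + (rho / 2.0) * (grid - x)**2
        return grid[int(np.argmin(vals))]

    x, z, rho = 0.0, 0.0, 1.0
    for _ in range(200):
        x = argmin_x(z, rho)   # alternating step in x
        z = argmin_z(x, rho)   # alternating step in z
        rho *= 1.05            # gentle (non-aggressive) penalty increase

    print(f"x = {x:.4f}, z = {z:.4f}, constraint gap = {abs(x - z):.2e}")

As rho grows, the constraint gap |x - z| shrinks toward zero, mirroring the asymptotic primal feasibility result stated in the abstract; the gentle multiplier (here 1.05) corresponds to the "not too aggressive" penalty updates under which the authors report good local points.
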
Pages: 793-797
Page count: 5