Convergence of the BFGS method for LC¹ convex constrained optimization

Cited by: 24
Author
Chen, XJ
Affiliation
[1] School of Mathematics, University of New South Wales, Sydney
Keywords
quasi-Newton methods; convex programming; nonsmooth equations
DOI
10.1137/S0363012994274823
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Subject classification code
0812
Abstract
This paper proposes a BFGS-SQP method for linearly constrained optimization where the objective function f is required only to have a Lipschitz gradient. The Karush-Kuhn-Tucker system of the problem is equivalent to a system of nonsmooth equations F(v) = 0. At each step, the quasi-Newton matrix is updated only if ‖F(v^k)‖ satisfies an update rule. The method converges globally, and the rate of convergence is superlinear when f is twice strongly differentiable at a solution of the optimization problem. No assumptions on the constraints are required. This generalizes the classical convergence theory of the BFGS method, which requires twice continuous differentiability of the objective function. Applications to stochastic programs with recourse on a CM-5 parallel computer are discussed.
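To make the guarded-update idea in the abstract concrete, here is a minimal Python sketch of a BFGS update that is applied only when the residual norm ‖F(v^k)‖ has decreased sufficiently. The threshold test, the 0.9 factor, and the function names are hypothetical stand-ins for illustration; they are not the paper's exact rule.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of an approximate Hessian B.
    s = v_next - v, y = grad_next - grad. Skips the update when the
    curvature condition s'y > 0 fails, keeping B positive definite."""
    sy = s @ y
    if sy <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B  # skip: curvature condition fails
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

def guarded_update(B, s, y, F_norm, F_norm_prev):
    """Illustrative update rule: refresh the quasi-Newton matrix only
    when the KKT residual ||F(v^k)|| has decreased by a fixed factor;
    otherwise keep the previous matrix. The paper's actual test differs."""
    if F_norm <= 0.9 * F_norm_prev:  # hypothetical threshold
        return bfgs_update(B, s, y)
    return B
```

Skipping the update when the residual stagnates is one common way to keep a quasi-Newton matrix well behaved on nonsmooth problems, where curvature information gathered near a kink can be unreliable.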
Pages: 2051-2063
Page count: 13