A Gradient-Based Neural Network Method for Solving Strictly Convex Quadratic Programming Problems

Cited by: 65
Authors
Nazemi, Alireza [1 ]
Nazemi, Masoomeh [2 ]
Affiliations
[1] Shahrood Univ, Sch Math Sci, Dept Math, Shahrood, Iran
[2] Islamic Azad Univ, Dept Food Sci & Technol, Damghan Branch, Damghan, Iran
Keywords
Neural network; Convex quadratic programming; Fischer-Burmeister function; Convergence; Stability; Constrained optimization problems; Variational inequalities; Model; Equations
DOI
10.1007/s12559-014-9249-0
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we study a gradient-based neural network method for solving strictly convex quadratic programming (SCQP) problems. By converting the SCQP problem into a system of ordinary differential equations (ODEs), we show that the solution trajectory of this ODE system tends to the set of stationary points of the original optimization problem. The proposed neural network is shown to be stable in the sense of Lyapunov and to converge to an exact optimal solution of the original problem. It is also found that a larger scaling factor yields a faster convergence rate of the trajectory. Simulation results show that the proposed neural network is feasible and efficient.
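The abstract does not state the dynamics explicitly, but the keywords (gradient-based network, Fischer-Burmeister function) point to the common construction: recast the KKT conditions of the SCQP using the Fischer-Burmeister function, build an energy function from the residuals, and let the network be the scaled gradient flow of that energy. The sketch below is a minimal illustration of that recipe under these assumptions, not the authors' exact model; the scaling factor eta, the forward-Euler step size, the iteration count, and the toy problem data are all hypothetical choices for demonstration.

```python
# Minimal sketch (assumed, not the paper's exact network): gradient-flow
# neural network for a strictly convex QP
#     minimize 0.5 * x^T Q x + c^T x   subject to  A x <= b.
# KKT complementarity is recast with the Fischer-Burmeister (FB) function,
# an energy E(x, lam) is formed from the residuals, and the network is the
# gradient flow d(x, lam)/dt = -eta * grad E, integrated by forward Euler.
import numpy as np

def energy_grad(x, lam, Q, c, A, b, eps=1e-12):
    """Gradient of E = 0.5*||Q x + c + A^T lam||^2 + 0.5*||phi||^2,
    where phi_i = FB(lam_i, (b - A x)_i) = sqrt(lam_i^2 + s_i^2) - lam_i - s_i."""
    r = Q @ x + c + A.T @ lam           # stationarity residual
    s = b - A @ x                       # inequality slacks
    nu = np.sqrt(lam**2 + s**2 + eps)   # smoothed norm inside FB
    phi = nu - lam - s                  # FB residual per constraint
    # d(phi_i)/dx = (s_i/nu_i - 1) * (-A[i, :]),  d(phi_i)/d(lam_i) = lam_i/nu_i - 1
    gx = Q @ r + A.T @ ((1.0 - s / nu) * phi)
    gl = A @ r + (lam / nu - 1.0) * phi
    return gx, gl

def neural_qp(Q, c, A, b, eta=10.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of the gradient flow (parameters are assumptions)."""
    x = np.zeros(len(c))
    lam = np.zeros(len(b))
    for _ in range(steps):
        gx, gl = energy_grad(x, lam, Q, c, A, b)
        x -= eta * dt * gx
        lam -= eta * dt * gl
    return x, lam

if __name__ == "__main__":
    # Toy SCQP with nonnegativity constraints x >= 0, written as -x <= 0.
    Q = np.array([[4.0, 1.0], [1.0, 3.0]])
    c = np.array([-1.0, -2.0])
    A = -np.eye(2)
    b = np.zeros(2)
    x, lam = neural_qp(Q, c, A, b)
    print("approximate minimizer:", x)   # expected near [1/11, 7/11]
    print("multipliers:", lam)
```

In this sketch the scaling factor eta plays the role noted in the abstract: a larger eta speeds up the continuous-time trajectory, although in a discrete Euler integration the product eta*dt must stay small enough for numerical stability.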
Pages: 484-495
Number of pages: 12