A New Recurrent Neural Network for Solving Convex Quadratic Programming Problems With an Application to the k-Winners-Take-All Problem

Cited: 51
Authors
Hu, Xiaolin [1 ,2 ]
Zhang, Bo [1 ,2 ]
Affiliations
[1] Tsinghua Univ, State Key Lab Intelligent Technol & Syst, TNList, Beijing 100084, Peoples R China
[2] Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, Vol. 20, No. 4
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Asymptotic stability; k-winners-take-all (k-WTA); linear programming; neural network; quadratic programming; LINEAR VARIATIONAL-INEQUALITIES; OPTIMIZATION PROBLEMS; O(N) COMPLEXITY; CONSTRAINTS; CIRCUIT; CONVERGENCE; EQUATIONS; DESIGN; KWTA;
DOI
10.1109/TNN.2008.2011266
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features a global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverses. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that, via a variable substitution, the proposed network reduces to an existing model for solving minimax problems; in this sense, it can also be viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by a simple structure, global convergence, and the capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
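To make the abstract's k-WTA idea concrete, the problem is commonly cast as a small QP whose dual has a single scalar state, giving O(n) work per integration step. The sketch below is a generic single-state dual dynamics of this kind, not the specific network proposed in the paper; the regularization weight `a`, step size `eta`, and iteration count are illustrative assumptions.

```python
# Hedged sketch of a scalar-state k-WTA dynamical system (a generic dual
# network, NOT the exact model of the paper). k-WTA is posed as the QP
#   min (a/2)||x||^2 - v^T x   s.t.  sum(x) = k,  0 <= x_i <= 1,
# whose KKT conditions give x_i = clip((v_i - y)/a, 0, 1) for a single
# dual variable y, evolving as dy/dt = sum_i x_i - k  (O(n) per step).

def k_wta(v, k, a=0.1, eta=0.01, steps=20000):
    """Return the k-WTA output for input vector v (assumes the gap between
    the k-th and (k+1)-th largest inputs exceeds a)."""
    clip = lambda z: min(1.0, max(0.0, z))
    y = 0.0
    for _ in range(steps):
        x = [clip((vi - y) / a) for vi in v]
        y += eta * (sum(x) - k)   # Euler step on the dual dynamics
    return [clip((vi - y) / a) for vi in v]

v = [0.1, 0.9, 0.5, 0.3, 0.7]
print(k_wta(v, k=2))  # winners at indices 1 and 4 (the two largest inputs)
```

Because the state is a single scalar regardless of n, the per-step cost is dominated by the n clip evaluations, which is the sense in which such designs achieve O(n) complexity.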
Pages: 654-664 (11 pages)
Related Papers (50 records)
  • [1] An Improved Dual Neural Network for Solving a Class of Quadratic Programming Problems and Its k-Winners-Take-All Application
    Hu, Xiaolin
    Wang, Jun
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2008, 19(12): 2022-2031
  • [2] A new K-Winners-Take-All neural network
    Liu, SB
    Wang, J
    PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), VOLS 1-5, 2005: 712-716
  • [3] A Recurrent Neural Network with a Tunable Activation Function for Solving K-Winners-Take-All
    Miao Peng
    Shen Yanjun
    Hou Jianshu
    Shen Yi
    2014 33RD CHINESE CONTROL CONFERENCE (CCC), 2014: 4957-4962
  • [4] A class of finite-time dual neural networks for solving quadratic programming problems and its k-winners-take-all application
    Li, Shuai
    Li, Yangming
    Wang, Zheng
    NEURAL NETWORKS, 2013, 39: 27-39
  • [5] A K-Winners-Take-All neural network based on linear programming formulation
    Gu, Shenshen
    Wang, Jun
    2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007: 37-40
  • [6] The K-Winners-Take-All Neural Network of Classification
    Brenych, Yana
    2013 12TH INTERNATIONAL CONFERENCE ON THE EXPERIENCE OF DESIGNING AND APPLICATION OF CAD SYSTEMS IN MICROELECTRONICS (CADSM 2013), 2013: 43+
  • [7] Another K-winners-take-all analog neural network
    Calvert, BD
    Marinov, CA
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2000, 11(4): 829-838
  • [8] A new k-winners-take-all neural network and its array architecture
    Yen, JC
    Guo, JI
    Chen, HC
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998, 9(5): 901-912
  • [9] A dynamic K-winners-take-all neural network
    Yang, JF
    Chen, CM
    IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B: CYBERNETICS, 1997, 27(3): 523-526
  • [10] A k-winners-take-all neural net
    Calvert, BD
    Marinov, CA
    PROCEEDINGS OF THE 39TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-5, 2000: 3547-3549