Performance analysis for a K-winners-take-all analog neural network: Basic theory

Cited: 37
Authors
Marinov, CA [1 ]
Calvert, BD
Affiliations
[1] Polytechn Univ Bucharest, Dept Elect Engn, Bucharest 77206, Romania
[2] Univ Auckland, Dept Math, Auckland, New Zealand
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2003, Vol. 14, No. 4
Keywords
continuous-time Hopfield network; K winners take all; large gain behavior; processing time;
DOI
10.1109/TNN.2003.813833
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104; 0812; 0835; 1405
Abstract
In a previous work, the authors proposed an analog Hopfield-type neural network that identifies the K largest components of a list of real numbers. In the present work, we identify computable restrictions on the parameters under which the network can repeatedly process lists, one after another, at a given rate. A complete mathematical analysis gives analytical bounds on the required processing time in terms of the circuit parameters, the length of the lists, and the relative separation of the list elements. These bounds allow circuit parameters to be set in practice to meet required clocking times. The emphasis is on high-gain functioning of each neuron. Numerical investigations confirm the accuracy of the theoretical predictions and study the influence of the various parameters on performance.
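For orientation, the following sketch shows the idealized input–output map that a K-winners-take-all network is designed to compute: given a list of real numbers, mark its K largest components. This is only the target operation, implemented here by a direct sort; the paper itself analyzes a continuous-time analog circuit that converges to this answer, not this sort-based computation.

```python
import numpy as np

def k_winners_take_all(values, k):
    """Return a binary vector marking the k largest components of `values`.

    Idealized k-WTA operation (sort-based); the analog network in the
    paper computes the same map via continuous-time dynamics.
    """
    values = np.asarray(values, dtype=float)
    winners = np.zeros(values.size, dtype=int)
    # argsort is ascending, so the last k indices are the k largest entries.
    winners[np.argsort(values)[-k:]] = 1
    return winners

# Example: select the 2 largest of five inputs.
print(k_winners_take_all([0.3, 0.9, 0.1, 0.7, 0.5], 2))  # -> [0 1 0 1 0]
```

The paper's bounds concern how quickly the analog circuit settles to this output as a function of list length and the relative separation of the list elements near the K-th largest value.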
Pages: 766-780 (15 pages)
Related papers (50 in total)
  • [21] On the Robust Design of K-Winners-Take-All Networks
    Perfetti, R.
    IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1995, 42(1): 55-58
  • [22] A Discrete-Time Recurrent Neural Network with One Neuron for k-Winners-Take-All Operation
    Liu, Qingshan; Cao, Jinde; Liang, Jinling
    Advances in Neural Networks - ISNN 2009, Pt 1, Proceedings, 2009, 5551: 272-+
  • [23] K-Winners-Take-All Circuit with O(N) Complexity
    Urahama, K.; Nagao, T.
    IEEE Transactions on Neural Networks, 1995, 6(3): 776-778
  • [24] A Continuous-Time Model of Analogue K-Winners-Take-All Neural Circuit
    Tymoshchuk, Pavlo V.
    Engineering Applications of Neural Networks, 2012, 311: 94-103
  • [25] Initialization-Based k-Winners-Take-All Neural Network Model Using Modified Gradient Descent
    Zhang, Yinyan; Li, Shuai; Geng, Guanggang
    IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(8): 4130-4138
  • [26] An Improved Dual Neural Network for Solving a Class of Quadratic Programming Problems and Its k-Winners-Take-All Application
    Hu, Xiaolin; Wang, Jun
    IEEE Transactions on Neural Networks, 2008, 19(12): 2022-2031
  • [27] A New Recurrent Neural Network for Solving Convex Quadratic Programming Problems With an Application to the k-Winners-Take-All Problem
    Hu, Xiaolin; Zhang, Bo
    IEEE Transactions on Neural Networks, 2009, 20(4): 654-664
  • [28] Design of a K-Winners-Take-All Model With a Binary Spike Train
    Tymoshchuk, Pavlo V.; Wunsch, Donald C., II
    IEEE Transactions on Cybernetics, 2019, 49(8): 3131-3140
  • [29] A CMOS K-Winners-Take-All Circuit with O(N) Complexity
    Sekerkiran, B.; Cilingiroglu, U.
    IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 1999, 46(1): 1-5
  • [30] Two k-Winners-Take-All Networks with Discontinuous Activation Functions
    Liu, Qingshan; Wang, Jun
    Neural Networks, 2008, 21(2-3): 406-413