The No-Prop algorithm: A new learning algorithm for multilayer neural networks

Cited by: 110
Authors
Widrow, Bernard [1 ]
Greenblatt, Aaron [1 ]
Kim, Youngsik [1 ]
Park, Dookun [1 ]
Affiliations
[1] Stanford Univ, Dept Elect Engn, ISL, Stanford, CA 94305 USA
Keywords
Neural networks; Training algorithm; Backpropagation
DOI
10.1016/j.neunet.2012.09.020
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A new learning algorithm for multilayer neural networks that we have named No-Propagation (No-Prop) is hereby introduced. With this algorithm, the weights of the hidden-layer neurons are set and fixed with random values. Only the weights of the output-layer neurons are trained, using steepest descent to minimize mean square error, with the LMS algorithm of Widrow and Hoff. The purpose of introducing nonlinearity with the hidden layers is examined from the point of view of Least Mean Square Error Capacity (LMS Capacity), defined as the maximum number of distinct patterns that can be trained into the network with zero error. This is shown to be equal to the number of weights of each of the output-layer neurons. The No-Prop algorithm and the Back-Prop algorithm are compared. Our experience with No-Prop is limited, but from the several examples presented here, the two algorithms appear to perform essentially the same, in both training and generalization, when the number of training patterns is less than or equal to the LMS Capacity. When the number of training patterns exceeds the Capacity, Back-Prop is generally the better performer, but equivalent performance can be obtained with No-Prop by raising the network's Capacity, that is, by adding neurons to the hidden layer that drives the output layer. The No-Prop algorithm is much simpler and easier to implement than Back-Prop, and it converges much faster. It is too early to say definitively where to use one or the other of these algorithms; this is still a work in progress. (C) 2012 Elsevier Ltd. All rights reserved.
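The abstract fully specifies the training rule, so a compact sketch may help make it concrete. The Python fragment below is a minimal illustration, not the authors' code: the tanh nonlinearity, the learning rate mu, the hidden-layer width, and the linear output stage are all assumptions made here for brevity. The paper itself requires only fixed random hidden-layer weights and Widrow-Hoff LMS training of the output layer.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_no_prop(X, Y, n_hidden=50, mu=0.01, epochs=100):
        """No-Prop sketch: fixed random hidden layer, LMS-trained output layer.

        X: (n_patterns, n_inputs) training inputs
        Y: (n_patterns, n_outputs) desired responses
        """
        n_inputs = X.shape[1]
        n_outputs = Y.shape[1]
        # Hidden-layer weights: set once at random and never adapted,
        # so no error propagates back through the network.
        W_hidden = rng.standard_normal((n_inputs, n_hidden))
        # Output-layer weights: the only trained parameters.
        W_out = np.zeros((n_hidden, n_outputs))
        for _ in range(epochs):
            for x, y in zip(X, Y):
                h = np.tanh(x @ W_hidden)         # fixed nonlinear hidden layer
                e = y - h @ W_out                 # error at the output
                W_out += 2 * mu * np.outer(h, e)  # Widrow-Hoff LMS update
        return W_hidden, W_out

In this sketch the LMS Capacity described in the abstract corresponds to n_hidden, the number of weights per output neuron: up to that many distinct patterns can in principle be trained in with zero error, and widening the hidden layer raises the Capacity.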
Pages: 180-186
Page count: 7