An Incremental Algorithm for Parallel Training of the Size and the Weights in a Feedforward Neural Network

Cited: 0
Authors
Kateřina Hlaváčková-Schindler
Manfred M. Fischer
Affiliations
[1] Austrian Academy of Sciences,Institute for Urban and Regional Research
[2] Academy of Sciences of the Czech Republic,Institute of Computer Science
[3] Wirtschaftsuniversität Wien,Department of Economic and Social Geography
Source
Neural Processing Letters | 2000, Vol. 11
Keywords
approximation of a function; feedforward network; incremental algorithm; variation of a function with respect to a set; weight decay;
DOI
not available
Abstract
An algorithm for the incremental approximation of functions in a normed linear space by feedforward neural networks is presented. The concept of variation of a function with respect to a set is used to estimate the approximation error, and the weight decay method is used to optimize the size and weights of the network at each iteration step of the algorithm. Two alternatives, recursively incremental and generally incremental, are proposed. In the generally incremental case, the algorithm optimizes the parameters of all units in the hidden layer at each step. In the recursively incremental case, the algorithm optimizes the parameters corresponding to only one unit in the hidden layer at each step, so an optimization problem with fewer parameters is solved at each step.
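The recursively incremental variant described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's algorithm: hidden tanh units are added one at a time, and at each step only the new unit's parameters are fit to the current residual, with a weight-decay penalty on the outer weight. The target function, the random-search inner optimizer, and all names (`fit_new_unit`, `decay`, etc.) are assumptions for illustration.

```python
import math
import random

def target(x):
    # Example function to approximate (illustrative choice).
    return math.sin(3.0 * x)

def unit(a, b, x):
    # One hidden tanh unit with inner parameters a, b.
    return math.tanh(a * x + b)

def fit_new_unit(xs, residual, decay=1e-3, trials=2000, rng=None):
    """Fit one new unit to the residual by random search over (a, b).

    For each candidate (a, b), the outer weight c is set to the
    closed-form minimizer of the weight-decayed least squares cost
        sum_i (r_i - c*u_i)^2 + decay * c^2,
    i.e. c = <r, u> / (<u, u> + decay).  This stands in for the
    paper's per-step optimization; it is not the original method.
    """
    rng = rng or random.Random(0)
    best = None
    for _ in range(trials):
        a = rng.uniform(-5.0, 5.0)
        b = rng.uniform(-5.0, 5.0)
        us = [unit(a, b, x) for x in xs]
        num = sum(r * u for r, u in zip(residual, us))
        den = sum(u * u for u in us) + decay
        c = num / den
        err = sum((r - c * u) ** 2 for r, u in zip(residual, us)) + decay * c * c
        if best is None or err < best[0]:
            best = (err, a, b, c)
    return best[1:]

def incremental_fit(xs, n_units=5):
    """Recursively incremental fit: add units one at a time,
    optimizing only the newest unit's parameters each step."""
    residual = [target(x) for x in xs]
    units, errors = [], []
    for _ in range(n_units):
        a, b, c = fit_new_unit(xs, residual)
        units.append((a, b, c))
        residual = [r - c * unit(a, b, x) for r, x in zip(residual, xs)]
        errors.append(sum(r * r for r in residual))
    return units, errors

xs = [i / 20.0 for i in range(-20, 21)]
units, errors = incremental_fit(xs)
print(errors)  # squared residual shrinks as units are added
```

Because each new unit's outer weight is the minimizer of a quadratic cost that includes the zero-weight option, the squared residual is non-increasing from step to step; the generally incremental variant would instead re-optimize all previously added units at each step, trading a larger per-step optimization problem for potentially faster error decay.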
Pages: 131-138 (7 pages)