Multilayer neural networks: an experimental evaluation of on-line training methods

Cited by: 21
Authors
Martí, R [1 ]
El-Fallahi, A [1 ]
Affiliation
[1] Univ Valencia, Dept Estadist & Invest Operat, E-46100 Valencia, Spain
Keywords
DOI
10.1016/S0305-0548(03)00104-7
CLC classification
TP39 [Computer applications];
Discipline codes
081203 ; 0835 ;
Abstract
Artificial neural networks (ANN) are inspired by the structure of biological neural networks and their ability to integrate knowledge and learning. In ANN training, the objective is to minimize the error over the training set. The most popular method for training these networks is back propagation, a gradient descent technique. Other non-linear optimization methods, such as the conjugate direction set or conjugate gradient methods, have also been used for this purpose. Recently, metaheuristics such as simulated annealing, genetic algorithms or tabu search have also been adapted to this context. There are situations in which the necessary training data are generated in real time and extensive training is not possible. This "on-line" training arises in the context of optimizing a simulation. This paper presents extensive computational experiments comparing 12 "on-line" training methods over a collection of 45 functions from the literature within a short-term horizon. We propose a new method based on the tabu search methodology that can compete in quality with the best previous approaches.
Scope and purpose
Artificial neural networks present a new paradigm for decision support that integrates knowledge and learning. They are inspired by biological neural systems, where the nodes of the network represent the neurons and the arcs represent the axons and dendrites. In recent years there has been increasing interest in ANN, since they have proven very effective in different contexts. In this paper we focus on the prediction/estimation problem for a given function, where the input of the net is given by the values of the function variables and the output is the estimation of the function image. Specifically, we consider the optimization problem that arises when training the net in the context of optimizing simulations (i.e. when training time is limited).
As far as we know, only partial studies have been published, in which a few training methods are compared over a limited set of instances. In this paper we present extensive computational experimentation with 12 different optimization methods over a set of 45 well-known functions. (C) 2003 Elsevier Ltd. All rights reserved.
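The "on-line" training setting described in the abstract updates the network after each incoming data point rather than after a full pass over a fixed training set. The following is a minimal sketch of that idea (on-line back propagation for a one-hidden-layer network), not code from the paper; the target function, network size and learning rate are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): on-line back propagation for a
# one-hidden-layer network approximating a scalar function f(x).
# Weights are updated after each training point, which suits settings where
# data arrive in real time and extensive training is not possible.

rng = np.random.default_rng(0)

def f(x):                              # example target function (an assumption)
    return np.sin(x)

n_hidden, lr = 10, 0.05
W1 = rng.normal(0, 0.5, (n_hidden, 1)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, n_hidden);      b2 = 0.0

def forward(x):
    h = np.tanh(W1[:, 0] * x + b1)     # hidden-layer activations
    return h, W2 @ h + b2              # network output

for _ in range(2000):                  # stream of training points
    x = rng.uniform(-np.pi, np.pi)
    h, y = forward(x)
    err = y - f(x)                     # prediction error on this point
    # gradient of 0.5 * err**2, propagated back through the tanh layer
    gh = err * W2 * (1 - h**2)
    W2 -= lr * err * h; b2 -= lr * err
    W1[:, 0] -= lr * gh * x; b1 -= lr * gh
```

Each weight update uses only the most recent point, so training cost per observation is constant; this is the gradient-descent baseline against which the paper's metaheuristic alternatives (simulated annealing, genetic algorithms, tabu search) are compared.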
Pages: 1491 - 1513
Page count: 23
Related papers (50 total)
  • [11] On-line training of recurrent neural networks with continuous topology adaptation
    Obradovic, D
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1996, 7 (01): 222 - 228
  • [12] A non-convergent on-line training algorithm for neural networks
    Utans, J
    [J]. BIOLOGICAL AND ARTIFICIAL COMPUTATION: FROM NEUROSCIENCE TO TECHNOLOGY, 1997, 1240 : 913 - 921
  • [13] Evaluation of the simplex method for training simple multilayer neural networks
    Dornier, M
    Heyd, B
    Danzart, M
    [J]. NEURAL COMPUTING & APPLICATIONS, 1998, 7 (02): 107 - 114
  • [15] Multilayer dynamic neural networks for non-linear system on-line identification
    Yu, W
    Poznyak, AS
    Li, XO
    [J]. INTERNATIONAL JOURNAL OF CONTROL, 2001, 74 (18) : 1858 - 1864
  • [16] Multilayer neural networks training methodic
    Golovko, V
    Maniakov, N
    Makhnist, L
    [J]. IDAACS'2003: PROCEEDINGS OF THE SECOND IEEE INTERNATIONAL WORKSHOP ON INTELLIGENT DATA ACQUISITION AND ADVANCED COMPUTING SYSTEMS: TECHNOLOGY AND APPLICATIONS, 2003, : 185 - 190
  • [17] An on-line learning algorithm for recurrent neural networks using variational methods
    Oh, WG
    Suh, BS
    [J]. 40TH MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOLS 1 AND 2, 1998, : 659 - 662
  • [18] On-Line Extreme Learning Machine for Training Time-Varying Neural Networks
    Ye, Yibin
    Squartini, Stefano
    Piazza, Francesco
    [J]. BIO-INSPIRED COMPUTING AND APPLICATIONS, 2012, 6840 : 49 - 54
  • [19] Continual on-line training of neural networks with applications to electric machine fault diagnostics
    Tallam, RM
    Habetler, TG
    Harley, RG
    [J]. PESC 2001: 32ND ANNUAL POWER ELECTRONICS SPECIALISTS CONFERENCE, VOLS 1-4, CONFERENCE PROCEEDINGS, 2001, : 2224 - 2228
  • [20] Theoretical analysis of batch and on-line training for gradient descent learning in neural networks
    Nakama, Takehiko
    [J]. NEUROCOMPUTING, 2009, 73 (1-3) : 151 - 159