Optimization of feedforward neural networks

Cited by: 22
Authors
Han, J
Moraga, C
Sinne, S
Affiliation
[1] Res. Grp. Computational Intelligence, Department of Computer Science I, University of Dortmund
Keywords
feedforward neural networks; soft computing; parametric nets
DOI
10.1016/0952-1976(95)00001-1
Chinese Library Classification
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
This paper presents some novel approaches to the design of neural networks with one or two hidden layers trained by the backpropagation algorithm. First, hybrid neural networks that use different activation functions in different layers of fully connected feedforward neural networks are introduced. Second, a variant sigmoid function with three parameters is discussed. The parameters control the dynamic range, symmetry and slope of the function, respectively. It is illustrated how these parameters influence the speed of backpropagation learning, and a parametric feedforward network with different parameter configurations in different layers is introduced. By regulating and modifying the parameter configurations of the sigmoid function in different layers, the error-signal problem, the oscillation problem and the asymmetrical-input problem can be reduced. Furthermore, hybrid optimization methods for the dynamic parameters are introduced: genetic algorithms are used to optimize the initial parameter configuration, and the dynamic parameters are then adjusted on-line using gradient-descent methods. Sequential adjustment algorithms are derived in order to avoid the moving-target problem and to increase the stability of the gradient-descent methods. The new schemes have advantages in both convergence speed and generalization capability. Experimental results on the two-spirals problem are provided. Copyright (C) 1996 Elsevier Science Ltd
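As a rough illustration of the three-parameter sigmoid described in the abstract, the sketch below assumes the common parameterization f(x) = a / (1 + e^(-b·x)) - c, where a sets the dynamic range, c shifts the symmetry and b scales the slope; the function names, parameter symbols and gradient expressions are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def parametric_sigmoid(x, a=2.0, b=1.0, c=1.0):
    """Assumed three-parameter sigmoid: 'a' sets the dynamic range,
    'c' shifts the symmetry of the output and 'b' scales the slope.
    With a=2, b=1, c=1 it coincides with tanh(x/2)."""
    return a / (1.0 + np.exp(-b * x)) - c

def parameter_gradients(x, a=2.0, b=1.0, c=1.0):
    """Partial derivatives of the sigmoid with respect to its shape
    parameters, as would be needed for the on-line gradient-descent
    adjustment mentioned in the abstract (illustrative only)."""
    s = 1.0 / (1.0 + np.exp(-b * x))   # plain logistic term
    x = np.asarray(x, dtype=float)
    return {
        "a": s,                         # df/da
        "b": a * x * s * (1.0 - s),     # df/db
        "c": -np.ones_like(x),          # df/dc
    }

if __name__ == "__main__":
    x = np.linspace(-3.0, 3.0, 7)
    # With the default configuration the unit behaves like tanh(x/2).
    print(np.allclose(parametric_sigmoid(x), np.tanh(x / 2.0)))  # True
```

In the hybrid scheme the abstract outlines, a genetic algorithm would search over per-layer configurations of (a, b, c) for a good initialization, after which gradients like those above drive the on-line adjustment; the sequential adjustment procedure itself is specific to the paper and is not reproduced here.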
Pages: 109-119
Number of pages: 11
Related papers
50 items in total
  • [1] Global Optimization of Feedforward Neural Networks
    Liang, Xun
    Xia, Shaowei
    [J]. Journal of Systems Science and Systems Engineering, 1993, (03) : 273 - 280
  • [2] A global optimization algorithm for training feedforward neural networks
    Huanqin Li
    [J]. DYNAMICS OF CONTINUOUS DISCRETE AND IMPULSIVE SYSTEMS-SERIES B-APPLICATIONS & ALGORITHMS, 2006, 13 : 846 - 849
  • [3] Learning as a multi-objective optimization in feedforward neural networks
    Dumitras, A
    Lazarescu, V
    Negoita, M
    [J]. FIRST INTERNATIONAL CONFERENCE ON KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, PROCEEDINGS 1997 - KES '97, VOLS 1 AND 2, 1997, : 588 - 593
  • [4] OPTIMIZATION OF THE HIDDEN UNIT FUNCTION IN FEEDFORWARD NEURAL NETWORKS
    FUJITA, O
    [J]. NEURAL NETWORKS, 1992, 5 (05) : 755 - 764
  • [5] A hybrid algorithm for weight and connectivity optimization in feedforward neural networks
    Pettersson, F
    Saxén, H
    [J]. ARTIFICIAL NEURAL NETS AND GENETIC ALGORITHMS, PROCEEDINGS, 2003, : 47 - 52
  • [6] Hybrid optimization of feedforward neural networks for handwritten character recognition
    Utschick, W
    Nossek, JA
    [J]. 1997 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), VOLS I-V, 1997, : 147 - 150
  • [7] A fast hybrid algorithm of global optimization for feedforward neural networks
    Jiang, MH
    Zhang, B
    Zhu, XY
    Jiang, MY
    [J]. CHINESE JOURNAL OF ELECTRONICS, 2001, 10 (02) : 214 - 218
  • [8] A fast hybrid algorithm of global optimization for feedforward neural networks
    Jiang, MH
    Zhu, XY
    Yuan, BZ
    Tang, XF
    Lin, BQ
    Ruan, QQ
    Jiang, MY
    [J]. 2000 5TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING PROCEEDINGS, VOLS I-III, 2000, : 1609 - 1612
  • [9] Optimization-based learning with bounded error for feedforward neural networks
    Alessandri, A
    Sanguineti, M
    Maggiore, M
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2002, 13 (02) : 261 - 273
  • [10] An improved particle swarm optimization for evolving feedforward artificial neural networks
    Yu, Jianbo
    Xi, Lifeng
    Wang, Shijin
    [J]. NEURAL PROCESSING LETTERS, 2007, 26 (03) : 217 - 231