Evolutionary training of hardware realizable multilayer perceptrons

Cited: 11
Authors
Plagianakos, VP [1]
Magoulas, GD
Vrahatis, MN
Affiliations
[1] Univ Patras, Artificial Intelligence Res Ctr, Computat Intelligence Lab, Dept Math, GR-26110 Patras, Greece
[2] Univ London, Sch Comp Sci & Informat Syst, London WC1E 7HX, England
Source
NEURAL COMPUTING & APPLICATIONS | 2006, Vol. 15, No. 1
Keywords
feedforward neural networks; backpropagation algorithm; neural networks with threshold activations; integer weight neural networks; integer programming; steepest descent; unconstrained optimization; differential evolution;
DOI
10.1007/s00521-005-0005-y
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The use of multilayer perceptrons (MLPs) with threshold functions (binary step function activations) greatly reduces the complexity of the hardware implementation of neural networks, provides tolerance to noise, and improves the interpretation of the internal representations. In certain cases, such as in learning stationary tasks, it may be sufficient to find appropriate weights for an MLP with threshold activation functions by software simulation and then transfer the weight values to the hardware implementation. Efficient training of these networks is a subject of considerable ongoing research. Methods available in the literature mainly focus on two-state (threshold) nodes and try to train the networks either by approximating the gradient of the error function and modifying the gradient descent procedure accordingly, or by progressively altering the shape of the activation functions. In this paper, we propose an evolution-motivated approach, which is eminently suitable for networks with threshold functions, and compare its performance with four other methods. The proposed evolutionary strategy needs no gradient-related information, is applicable when threshold activations are used from the beginning of training (as in "on-chip" training), and is able to train networks with integer weights.
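The kind of training the abstract describes lends itself to a compact illustration. Below is a minimal sketch of differential-evolution training for an integer-weight MLP with binary step activations on the XOR task. It is not the authors' implementation: the 2-2-1 architecture, the hypothetical 4-bit signed weight range [-8, 7], and the DE/rand/1/bin settings (NP = 30, F = 0.7, CR = 0.9) are all illustrative assumptions. Note that the loop uses only error evaluations, never gradients, and keeps every trial weight an integer by rounding and clipping.

```python
import numpy as np

# XOR task: 4 patterns, 2 binary inputs, 1 binary target
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 1, 1, 0])

# 2-2-1 MLP with threshold (binary step) activations.
# Weight vector layout (9 genes): W1 (2x2), b1 (2), W2 (2), b2 (1).
DIM = 9
W_MIN, W_MAX = -8, 7  # hypothetical 4-bit signed integer weight range

def unpack(w):
    return w[0:4].reshape(2, 2), w[4:6], w[6:8], w[8]

def step(z):
    return (z >= 0).astype(int)

def error(w):
    """Number of misclassified XOR patterns (0 means a perfect network)."""
    W1, b1, W2, b2 = unpack(w)
    h = step(X @ W1.T + b1)   # hidden layer, threshold activations
    y = step(h @ W2 + b2)     # output layer, threshold activation
    return int(np.sum(y != T))

rng = np.random.default_rng(0)
NP, F, CR = 30, 0.7, 0.9      # population size, mutation scale, crossover rate
pop = rng.integers(W_MIN, W_MAX + 1, size=(NP, DIM))
fit = np.array([error(p) for p in pop])

for gen in range(200):
    for i in range(NP):
        # DE/rand/1 mutation from three distinct random population members
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Round and clip so trial weights stay integers in the hardware range
        v = np.clip(np.rint(v), W_MIN, W_MAX).astype(int)
        # Binomial crossover with one guaranteed gene from the mutant
        mask = rng.random(DIM) < CR
        mask[rng.integers(DIM)] = True
        trial = np.where(mask, v, pop[i])
        # Greedy selection: the trial replaces its parent if it is no worse
        f = error(trial)
        if f <= fit[i]:
            pop[i], fit[i] = trial, f
    if fit.min() == 0:
        break

print(f"generation {gen}: misclassified = {fit.min()}, weights = {pop[fit.argmin()]}")
```

Accepting a trial that is no worse than its parent lets the population drift across the flat plateaus that step activations create in the error surface, which is precisely where gradient-based methods stall; this is one plausible reason evolutionary strategies suit threshold networks.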
Pages
33-40 (8 pages)
Related Papers
50 items total
  • [1] Evolutionary training of hardware realizable multilayer perceptrons
    Plagianakos, VP
    Magoulas, GD
    Vrahatis, MN
    [J]. NEURAL COMPUTING & APPLICATIONS, 2006, 15(1): 33-40
  • [2] Fast training of multilayer perceptrons
    Verma, B
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 1997, 8(6): 1314-1320
  • [3] An adaptive method of training multilayer perceptrons
    Lo, JT
    Bassu, D
    [J]. IJCNN'01: INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2001: 2013-2018
  • [4] Robust formulations for training multilayer perceptrons
    Kärkkäinen, T
    Heikkola, E
    [J]. NEURAL COMPUTATION, 2004, 16(4): 837-862
  • [5] Training multilayer perceptrons parameter by parameter
    Li, YL
    Wang, KQ
    [J]. PROCEEDINGS OF THE 2004 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2004: 3397-3401
  • [6] Efficient block training of multilayer perceptrons
    Navia-Vázquez, A
    Figueiras-Vidal, AR
    [J]. NEURAL COMPUTATION, 2000, 12(6): 1429-1447
  • [7] A new algorithm for training multilayer perceptrons
    Palmieri, F
    Shah, SA
    [J]. 1989 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS, VOLS 1-3: CONFERENCE PROCEEDINGS, 1989: 427-428
  • [8] Natural conjugate gradient training of multilayer perceptrons
    Gonzalez, Ana
    Dorronsoro, Jose R.
    [J]. ARTIFICIAL NEURAL NETWORKS - ICANN 2006, PT 1, 2006, 4131: 169-177
  • [9] Training multilayer perceptrons by principal component analysis
    Biehl, M
    Bunzmann, C
    Urbanczik, R
    [J]. PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2001, 302(1-4): 56-63
  • [10] The role of weight domain in evolutionary design of multilayer perceptrons
    Grzenda, M
    Macukow, B
    [J]. IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL VI, 2000: 596-599