Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations

Cited by: 17
Authors
Karayiannis, NB
Affiliation
[1] Department of Electrical and Computer Engineering, University of Houston
DOI
10.1109/72.485677
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents an unsupervised learning scheme for initializing the internal representations of feedforward neural networks, which accelerates the convergence of supervised learning algorithms. It proposes that the initial set of internal representations can be formed through a bottom-up unsupervised learning process applied before the top-down supervised training algorithm. The synaptic weights that connect the input of the network with the hidden units can be determined through linear or nonlinear variations of a generalized Hebbian learning rule known as Oja's rule. Various generalized Hebbian rules were experimentally tested and evaluated in terms of their effect on the convergence of the supervised training process. Several experiments indicated that the proposed initialization of the internal representations significantly improves the convergence of gradient-descent-based algorithms on nontrivial training tasks, and that the improvement grows with the size and complexity of the training task.
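The bottom-up initialization the abstract describes can be illustrated with a generalized Hebbian (Sanger-type) update, of which Oja's single-unit rule is the one-output special case. This is a minimal sketch under stated assumptions: the helper name `hebbian_init`, the learning rate, and the epoch count are illustrative choices, not the paper's exact variants.

```python
import numpy as np

def hebbian_init(X, n_hidden, lr=0.01, epochs=20, seed=0):
    """Sketch: derive hidden-layer weights from the inputs alone via a
    generalized Hebbian (Sanger) rule before any supervised training.
    X: (n_samples, n_inputs) array of training inputs.
    Returns W: (n_hidden, n_inputs) candidate initial weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # linear hidden-unit responses
            # Sanger's generalized Hebbian update: each unit subtracts the
            # reconstruction due to itself and all preceding units, so the
            # rows converge toward successive principal directions of X.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

The resulting `W` would then serve as the initial input-to-hidden weights handed to the supervised (gradient-descent) phase, in place of a purely random start.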
Pages: 419 - 426 (8 pages)
Related papers
(50 in total)
  • [1] Improved training rules for multilayered feedforward neural networks
    Sung, SW
    Lee, TY
    Park, S
    [J]. INDUSTRIAL & ENGINEERING CHEMISTRY RESEARCH, 2003, 42 (06) : 1275 - 1278
  • [2] Contrastive Hebbian Feedforward Learning for Neural Networks
    Kermiche, Noureddine
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (06) : 2118 - 2128
  • [3] Training feedforward neural networks using neural networks and genetic algorithms
    Tellez, P
    Tang, Y
    [J]. INTERNATIONAL CONFERENCE ON COMPUTING, COMMUNICATIONS AND CONTROL TECHNOLOGIES, VOL 1, PROCEEDINGS, 2004, : 308 - 311
  • [4] On the comparison of random and Hebbian weights for the training of single-hidden layer feedforward neural networks
    Samiee, Kaveh
    Iosifidis, Alexandros
    Gabbouj, Moncef
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2017, 83 : 177 - 186
  • [5] ON TRAINING FEEDFORWARD NEURAL NETWORKS
    KAK, S
    [J]. PRAMANA-JOURNAL OF PHYSICS, 1993, 40 (01): : 35 - 42
  • [6] Training feedforward neural networks using genetic algorithms
    [J]. 1600, Morgan Kaufmann Publ Inc, San Mateo, CA, USA (01):
  • [7] Generalized feedforward control using physics informed neural networks
    Bolderman, M.
    Fan, D.
    Lazar, M.
    Butler, H.
    [J]. IFAC PAPERSONLINE, 2022, 55 (16): : 148 - 153
  • [8] ROBUSTNESS OF REPRESENTATIONS IN MULTILAYER FEEDFORWARD NEURAL NETWORKS
    DIAMOND, P
    FOMENKO, IV
    [J]. CIRCUITS SYSTEMS AND SIGNAL PROCESSING, 1993, 12 (02) : 211 - 221
  • [9] Using spotted hyena optimizer for training feedforward neural networks
    Luo, Qifang
    Li, Jie
    Zhou, Yongquan
    Liao, Ling
    [J]. COGNITIVE SYSTEMS RESEARCH, 2021, 65 (65): : 1 - 16
  • [10] Training multilayer feedforward neural networks using dynamic programming
    Sun, M
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH SOUTHEASTERN SYMPOSIUM ON SYSTEM THEORY, 1996, : 163 - 167