A fast learning method for feedforward neural networks

Times Cited: 17
Authors
Wang, Shitong [1 ,2 ]
Chung, Fu-Lai [2 ]
Wang, Jun [1 ]
Wu, Jun [1 ]
Affiliations
[1] Jiangnan Univ, Sch Digital Media, Wuxi, Jiangsu, Peoples R China
[2] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Fast learning method; Feedforward neural network; Extreme learning machine; Hidden-feature-space ridge regression; MACHINE; APPROXIMATION; REGRESSION;
DOI
10.1016/j.neucom.2014.01.065
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In order to circumvent the very slow convergence of most traditional learning algorithms for single-hidden-layer feedforward neural networks (SLFN), the extreme learning machine (ELM) was recently developed to achieve extremely fast learning with good performance by training only the output weights. However, it cannot be applied to multiple-hidden-layer feedforward neural networks (MLFN), which remains a challenging bottleneck of ELM. In this work, a novel fast learning method (FLM) for feedforward neural networks is proposed. First, based on existing ridge regression theories, the hidden-feature-space ridge regression (HFSR) and the centered ridge regression (Centered-ELM) are presented, and their connection with ELM is theoretically revealed. As special kernel methods, they can inherently be used to propagate the prominent advantages of ELM into MLFN. Then, the fast learning method FLM is proposed as a unified framework for HFSR and Centered-ELM. FLM can be applied to both SLFN and MLFN with single or multiple outputs. In FLM, only the parameters in the last hidden layer need to be adjusted, while the parameters in all other hidden layers can be randomly assigned. The proposed FLM was tested against state-of-the-art methods on real-world datasets and provides better and more reliable results. (C) 2014 Elsevier B.V. All rights reserved.
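The abstract's core recipe is concrete enough to sketch: hidden-layer parameters are assigned at random and only the output weights are fitted, in closed form, by ridge regression over the hidden-feature matrix. Below is a minimal single-hidden-layer sketch of that ELM-style step, not the paper's FLM itself, which extends the idea to multiple hidden layers; the function names, the tanh activation, and the toy data are illustrative assumptions.

import numpy as np

def elm_ridge_fit(X, Y, n_hidden=100, reg=1e-3, seed=0):
    # Hidden-layer weights and biases are drawn at random and never
    # trained, mirroring the random assignment the abstract describes.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-feature matrix
    # Output weights via the closed-form ridge solution:
    #   beta = (H^T H + reg * I)^(-1) H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(W, b, beta, X):
    return np.tanh(X @ W + b) @ beta

# Toy usage: single-output regression on synthetic data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
Y = np.sin(X[:, :1])
W, b, beta = elm_ridge_fit(X, Y, n_hidden=50, reg=1e-2)
Y_hat = elm_predict(W, b, beta, X)

Because the hidden layer is fixed, the only cost is one linear solve of size n_hidden, which is what makes the training "extremely fast" relative to gradient-based methods.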
Pages: 295-307
Page count: 13