Diffusion learning algorithms for feedforward neural networks

Cited by: 1
Authors
Skorohod B.A. [1]
Affiliations
[1] Sevastopol National Technical University, Sevastopol
Keywords
extended Kalman filter; feedforward neural network; learning algorithm
DOI
10.1007/s10559-013-9516-1
Abstract
The problem of training feedforward neural networks is considered. To solve it, new algorithms are proposed. They are based on the asymptotic analysis of the extended Kalman filter (EKF) and on a separable network structure. Linear weights are interpreted as diffusion random variables with zero expectation and a covariance matrix proportional to an arbitrarily large parameter λ. Asymptotic expressions for the EKF are derived as λ→∞. They are called diffusion learning algorithms (DLAs). It is shown that they are robust with respect to the accumulation of rounding errors in contrast to their prototype EKF with a large but finite λ and that, under certain simplifying assumptions, an extreme learning machine (ELM) algorithm can be obtained from a DLA. A numerical example shows that the accuracy of a DLA may be higher than that of an ELM algorithm. © 2013 Springer Science+Business Media New York.
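As a rough illustration of the ELM special case mentioned in the abstract (a sketch of the general ELM idea, not the authors' DLA itself): an ELM fixes a randomly generated hidden layer and fits only the linear output weights by least squares, which corresponds to treating those weights as having a non-informative (λ→∞) prior. A minimal NumPy sketch on toy data, with all sizes, activations, and data chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-3, 3]
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()

# Random hidden layer (fixed, never trained), as in an ELM
n_hidden = 50
W = rng.normal(size=(1, n_hidden))   # input-to-hidden weights
b = rng.normal(size=n_hidden)        # hidden biases
H = np.tanh(X @ W + b)               # hidden-layer output matrix

# Linear output weights: batch least squares (pseudoinverse solution)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)  # training RMSE
```

The DLAs of the paper are recursive (EKF-based) rather than batch, so the block above only mirrors the limiting least-squares solution for the linear weights, not the recursion itself.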
Pages: 334-346
Page count: 12
Related papers
50 records in total
  • [1] Fast learning algorithms for feedforward neural networks
    Jiang, M. H.; Gielen, G.; Zhang, B.; Luo, Z. S.
    Applied Intelligence, 2003, 18(1): 37-54
  • [2] Optimal filtering algorithms for fast learning in feedforward neural networks
    Shah, S.; Palmieri, F.; Datum, M.
    Neural Networks, 1992, 5(5): 779-787
  • [3] Randomized algorithms for feedforward neural networks
    Li, Fan-jun; Li, Ying
    Proceedings of the 35th Chinese Control Conference, 2016: 3664-3668
  • [4] Quantum algorithms for feedforward neural networks
    Allcock, Jonathan; Hsieh, Chang-Yu; Kerenidis, Iordanis; Zhang, Shengyu
    ACM Transactions on Quantum Computing, 2020, 1(1)
  • [5] Analysis of gradient descent learning algorithms for multilayer feedforward neural networks
    Guo, H.; Gelfand, S. B.
    IEEE Transactions on Circuits and Systems, 1991, 38(8): 883-894
  • [6] Learning and structural design of feedforward neural networks by employing genetic algorithms
    Bohari, A. R.; Mizuno, N.
    SICE '96 - Proceedings of the 35th SICE Annual Conference: International Session Papers, 1996: 1377-1382
  • [7] Constructive algorithms for structure learning in feedforward neural networks for regression problems
    Kwok, T. Y.; Yeung, D. Y.
    IEEE Transactions on Neural Networks, 1997, 8(3): 630-645
  • [8] Learning and forgetting algorithms for feedforward networks
    Kuh, A.
    Proceedings of the 22nd Conference on Information Sciences and Systems, Vols 1 & 2, 1988: 282-286
  • [9] Training feedforward neural networks using neural networks and genetic algorithms
    Tellez, P.; Tang, Y.
    International Conference on Computing, Communications and Control Technologies, Vol 1, Proceedings, 2004: 308-311