A very fast learning method for neural networks based on sensitivity analysis

Cited by: 0
Authors:
Castillo, Enrique [1 ]
Guijarro-Berdinas, Bertha
Fontenla-Romero, Oscar
Alonso-Betanzos, Amparo
Institutions:
[1] Univ Cantabria, Dept Appl Math & Computat Sci, E-39005 Santander, Spain
[2] Univ Castilla La Mancha, Santander 39005, Spain
[3] Univ A Coruna, Fac Informat, Dept Comp Sci, La Coruna 15071, Spain
Keywords:
supervised learning; neural networks; linear optimization; least-squares; initialization method; sensitivity analysis;
DOI: not available
Chinese Library Classification: TP [automation technology, computer technology];
Discipline code: 0812
Abstract:
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; these initial values are then updated using sensitivity formulas that involve the weights of both layers, and the process is repeated until convergence. Since the weights are learnt by solving a linear system of equations, there is an important saving in computational time. The method also yields the local sensitivities of the least-squares errors with respect to the input and output data at no extra computational cost, because the necessary information becomes available without additional calculations. The method, called the Sensitivity-Based Linear Learning Method, can also provide an initial set of weights, which significantly improves the behavior of other learning algorithms. The theoretical basis for the method is given, and its performance is illustrated on several well-known data sets, where it is compared with other learning algorithms. The results show a learning speed generally faster than that of existing methods; in addition, the method can be used as an initialization tool for other well-known methods, with significant improvements.
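The training scheme described in the abstract can be sketched in a few lines of NumPy. This is a simplified illustration, not the paper's exact algorithm: it assumes a tanh hidden layer, and a plain gradient step on the intermediate hidden targets stands in for the paper's sensitivity-based update formulas. All variable names (`Z`, `W1`, `W2`) and the toy problem are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: approximate y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
Y = np.sin(X)
n, H = len(X), 10                              # samples, hidden units

Xb = np.hstack([X, np.ones((n, 1))])           # inputs with a bias column

# Step 1: assign random values to the outputs of the first layer,
# kept inside (-1, 1) so the inverse of tanh is defined.
Z = rng.uniform(-0.9, 0.9, size=(n, H))

step = 0.01
for _ in range(200):
    # Each layer is trained by an ordinary linear least-squares solve:
    # the hidden layer fits Xb @ W1 to arctanh(Z) (tanh inverted at the targets),
    W1, *_ = np.linalg.lstsq(Xb, np.arctanh(Z), rcond=None)
    # and the output layer fits [Z, 1] @ W2 to Y.
    Zb = np.hstack([Z, np.ones((n, 1))])
    W2, *_ = np.linalg.lstsq(Zb, Y, rcond=None)
    # Update the intermediate targets Z by descending the sum of both
    # layers' squared errors with respect to Z (a plain gradient step,
    # a crude stand-in for the sensitivity-based update).
    g1 = -2.0 * (Xb @ W1 - np.arctanh(Z)) / (1.0 - Z ** 2)
    g2 = 2.0 * (Zb @ W2 - Y) @ W2[:H].T
    Z = np.clip(Z - step * (g1 + g2), -0.99, 0.99)

# Final output-layer solve and training error.
Zb = np.hstack([Z, np.ones((n, 1))])
W2, *_ = np.linalg.lstsq(Zb, Y, rcond=None)
mse = float(np.mean((Zb @ W2 - Y) ** 2))
print(f"training MSE: {mse:.4f}")
```

The point of the sketch is the cost structure the abstract emphasizes: each iteration solves two linear least-squares problems instead of running nonlinear optimization over all weights, which is where the claimed speed-up comes from.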
Pages: 1159 - 1182 (24 pages)
Related papers (50 in total):
  • [21] ART neural networks for medical data analysis and fast distributed learning
    Carpenter, GA
    Milenova, BL
    ARTIFICIAL NEURAL NETWORKS IN MEDICINE AND BIOLOGY, 2000, : 10 - 17
  • [22] Learning Automata Based Incremental Learning Method for Deep Neural Networks
    Guo, Haonan
    Wang, Shilin
    Fan, Jianxun
    Li, Shenghong
    IEEE ACCESS, 2019, 7: 41164 - 41171
  • [23] Fast Learning Method of Interval Type-2 Fuzzy Neural Networks
    Olczyk, Damian
    Markowska-Kaczmar, Urszula
    2014 14TH UK WORKSHOP ON COMPUTATIONAL INTELLIGENCE (UKCI), 2014, : 134 - 139
  • [24] Structure Optimization of BP Neural Networks Based on Sensitivity Analysis
    Zhao, Jian
    Shen, Yunzhong
    2011 INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AND NEURAL COMPUTING (FSNC 2011), VOL V, 2011, : 531 - 535
  • [25] Structure Optimization of BP Neural Networks Based on Sensitivity Analysis
    Zhao, Jian
    Shen, Yunzhong
    2011 AASRI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND INDUSTRY APPLICATION (AASRI-AIIA 2011), VOL 1, 2011, : 77 - 81
  • [26] Fractal learning of fast orthogonal neural networks
    A. Yu. Dorogov
    Optical Memory and Neural Networks, 2012, 21 (2) : 105 - 118
  • [27] Fast learning algorithms for feedforward neural networks
    Jiang, MH
    Gielen, G
    Zhang, B
    Luo, ZS
    APPLIED INTELLIGENCE, 2003, 18 (01) : 37 - 54
  • [28] A fast magnitude estimation method based on deep convolutional neural networks
    Wang ZiFa
    Liao JiAn
    Wang YanWei
    Wei DongLiang
    Zhao DengKe
    CHINESE JOURNAL OF GEOPHYSICS-CHINESE EDITION, 2023, 66 (01): 272 - 288
  • [29] Fast Learning Algorithms for Feedforward Neural Networks
    Minghu Jiang
    Georges Gielen
    Bo Zhang
    Zhensheng Luo
    Applied Intelligence, 2003, 18 : 37 - 54
  • [30] The Learning Algorithm Based on Multiresolution Analysis for Neural Networks
    Han, Min
    Yin, Jia
    Li, Yang
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 783 - 787