A very fast learning method for neural networks based on sensitivity analysis

Cited by: 0
Authors
Castillo, Enrique [1 ]
Guijarro-Berdinas, Bertha
Fontenla-Romero, Oscar
Alonso-Betanzos, Amparo
Affiliations
[1] Univ Cantabria, Dept Appl Math & Computat Sci, E-39005 Santander, Spain
[2] Univ Castilla La Mancha, Santander 39005, Spain
[3] Univ A Coruna, Fac Informat, Dept Comp Sci, La Coruna 15071, Spain
Keywords
supervised learning; neural networks; linear optimization; least-squares; initialization method; sensitivity analysis
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; these initial values are then updated using sensitivity formulas that involve the weights of both layers, and the process is repeated until convergence. Since the weights are learned by solving linear systems of equations, there is a substantial saving in computational time. The method also yields the local sensitivities of the least-squares errors with respect to the input and output data at no extra computational cost, because the necessary information is produced as a by-product of training. The method, called the Sensitivity-Based Linear Learning Method, can also be used to provide an initial set of weights, which significantly improves the behavior of other learning algorithms. The theoretical basis for the method is given, and its performance is illustrated by application to several examples, in which it is compared with other learning algorithms on well-known data sets. The results show learning speeds generally faster than those of existing methods; in addition, the method can be used as an initialization tool for other well-known algorithms, with significant improvements.
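The alternating scheme the abstract describes can be sketched as follows. This is a minimal illustration under assumed activations (tanh hidden layer, linear output layer); the function names, step size, iteration count, and the final output-layer refit are illustrative choices, not taken from the paper. Each iteration solves one linear least-squares problem per layer for guessed hidden outputs `z`, then nudges `z` along the sensitivities (gradients) of the two layers' squared errors:

```python
import numpy as np

def sbllm(X, Y, hidden, iters=100, rho=0.02, rng=None):
    """Sketch of sensitivity-based linear learning (assumed form:
    tanh hidden layer, linear output layer)."""
    rng = np.random.default_rng(rng)
    N = X.shape[0]
    Xb = np.hstack([X, np.ones((N, 1))])           # inputs with bias column
    z = rng.uniform(-0.9, 0.9, (N, hidden))        # guessed hidden-layer outputs
    for _ in range(iters):
        # Layer 1: linear least squares onto the pre-activations arctanh(z)
        W1, *_ = np.linalg.lstsq(Xb, np.arctanh(z), rcond=None)
        # Layer 2: linear least squares from guessed hidden outputs to targets
        Zb = np.hstack([z, np.ones((N, 1))])
        W2, *_ = np.linalg.lstsq(Zb, Y, rcond=None)
        # Sensitivities of the two layers' squared errors w.r.t. z:
        # dQ1/dz = -2 r1 / (1 - z^2)   (chain rule through arctanh)
        # dQ2/dz =  2 r2 W2_weights^T  (bias row of W2 excluded)
        r1 = Xb @ W1 - np.arctanh(z)
        r2 = Zb @ W2 - Y
        grad = -2 * r1 / (1 - z**2) + 2 * r2 @ W2[:-1].T
        z = np.clip(z - rho * grad, -0.95, 0.95)   # keep z inside tanh's range
    # Pragmatic final step (not from the paper): refit the output layer
    # on the hidden activations the learned W1 actually produces.
    H = np.tanh(Xb @ W1)
    Hb = np.hstack([H, np.ones((N, 1))])
    W2, *_ = np.linalg.lstsq(Hb, Y, rcond=None)
    return W1, W2

def predict(X, W1, W2):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    H = np.tanh(Xb @ W1)
    return np.hstack([H, np.ones((H.shape[0], 1))]) @ W2
```

Because each step is a least-squares solve rather than an iterative gradient descent over the weights themselves, the per-iteration cost is dominated by two small linear systems, which is the source of the speed-up the abstract claims.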
Pages: 1159 - 1182
Page count: 24
Related Papers
50 records in total
  • [1] An Incremental Learning Method for Neural Networks Based on Sensitivity Analysis
    Perez-Sanchez, Beatriz
    Fontenla-Romero, Oscar
    Guijarro-Berdinas, Bertha
    CURRENT TOPICS IN ARTIFICIAL INTELLIGENCE, 2010, 5988 : 42 - 50
  • [2] A Supervised Learning Method for Neural Networks Based on Sensitivity Analysis with Automatic Regularization
    Perez-Sanchez, Beatriz
    Fontenla-Romero, Oscar
    Guijarro-Berdinas, Bertha
    BIO-INSPIRED SYSTEMS: COMPUTATIONAL AND AMBIENT INTELLIGENCE, PT 1, 2009, 5517 : 157 - 164
  • [3] Fast learning method for RAAM based on sensitivity analysis
    Barcz, A.
    PHOTONICS APPLICATIONS IN ASTRONOMY, COMMUNICATIONS, INDUSTRY, AND HIGH-ENERGY PHYSICS EXPERIMENTS 2014, 2014, 9290
  • [4] A fast learning method for feedforward neural networks
    Wang, Shitong
    Chung, Fu-Lai
    Wang, Jun
    Wu, Jun
    NEUROCOMPUTING, 2015, 149 : 295 - 307
  • [5] Sensitivity Analysis of the Neural Networks Randomized Learning
    Dudek, Grzegorz
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, PT I, 2019, 11508 : 51 - 61
  • [6] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, AP
    FUNDAMENTA INFORMATICAE, 2001, 45 (04) : 295 - 328
  • [7] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, AP
    FUNDAMENTA INFORMATICAE, 2001, 46 (03) : 219 - 252
  • [8] Sensitivity analysis for selective learning by feedforward neural networks
    Engelbrecht, Andries P.
    FUNDAMENTA INFORMATICAE, 2001, 45
  • [9] A fast adaptive backstepping method based on neural networks
    College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
    Yuhang Xuebao (Journal of Astronautics), 2008, (6) : 1888 - 1894
  • [10] Fast Structural Learning of Distance-Based Neural Networks
    Tominga, Naoki
    Zhao, Qiangfu
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2009, : 3370 - 3377