A very fast learning method for neural networks based on sensitivity analysis

Citations: 0
Authors
Castillo, Enrique [1]
Guijarro-Berdinas, Bertha
Fontenla-Romero, Oscar
Alonso-Betanzos, Amparo
Affiliations
[1] Univ Cantabria, Dept Appl Math & Computat Sci, E-39005 Santander, Spain
[2] Univ Castilla La Mancha, Santander 39005, Spain
[3] Univ A Coruna, Fac Informat, Dept Comp Sci, La Coruna 15071, Spain
Keywords
supervised learning; neural networks; linear optimization; least-squares; initialization method; sensitivity analysis;
DOI
Not available
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
This paper introduces a learning method for two-layer feedforward neural networks based on sensitivity analysis, which uses a linear training algorithm for each of the two layers. First, random values are assigned to the outputs of the first layer; these initial values are then updated using sensitivity formulas that involve the weights of each layer, and the process is repeated until convergence. Because the weights are learned by solving a linear system of equations, there is a substantial saving in computational time. The method also yields the local sensitivities of the least-squares errors with respect to the input and output data at no extra computational cost, since the necessary information is already available without further calculations. This method, called the Sensitivity-Based Linear Learning Method, can also provide an initial set of weights that significantly improves the behavior of other learning algorithms. The theoretical basis of the method is given, and its performance is illustrated by applying it to several examples, comparing it against several learning algorithms on well-known data sets. The results show a learning speed that is generally faster than that of existing methods. In addition, the method can be used as an initialization tool for other well-known algorithms, with significant improvements.
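The core idea in the abstract is that, once the desired outputs of a layer are fixed, training that layer reduces to a linear least-squares problem (mapping the targets through the inverse of the activation function first). The sketch below illustrates that linear subproblem only; it is not the authors' algorithm or code, and the function name, toy data, and tanh activation are illustrative assumptions.

```python
import numpy as np

def fit_layer_linear(inputs, targets, inv_activation=None):
    """Solve for weights W minimizing ||[inputs, 1] @ W - g^{-1}(targets)||^2.

    If the layer applies a nonlinearity g, training becomes a linear
    least-squares problem after mapping the desired outputs through g^{-1}
    (assumed invertible on the range of the targets).
    """
    y = targets if inv_activation is None else inv_activation(targets)
    X = np.hstack([inputs, np.ones((inputs.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, y, rcond=None)               # solve the linear system
    return W

# Toy usage: recover a tanh layer whose desired outputs are known.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Z_desired = np.tanh(X @ (0.5 * rng.normal(size=(3, 5))))    # illustrative hidden outputs
W = fit_layer_linear(X, Z_desired, inv_activation=np.arctanh)
Z_pred = np.tanh(np.hstack([X, np.ones((100, 1))]) @ W)
```

Because the targets here lie exactly in the model class, the least-squares solve recovers them; in the paper's setting the same linear solve is alternated with sensitivity-based updates of the assigned intermediate outputs until convergence.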
Pages: 1159-1182 (24 pages)
Related papers
50 items
  • [31] A Curiosity-Based Learning Method for Spiking Neural Networks
    Shi, Mengting
    Zhang, Tielin
    Zeng, Yi
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2020, 14
  • [32] DEEP LEARNING BASED METHOD FOR PRUNING DEEP NEURAL NETWORKS
    Li, Lianqiang
    Zhu, Jie
    Sun, Ming-Ting
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 312 - 317
  • [34] A coverage study of the CMSSM based on ATLAS sensitivity using fast neural networks techniques
    Bridges, Michael
    Cranmer, Kyle
    Feroz, Farhan
    Hobson, Mike
    Ruiz de Austri, Roberto
    Trotta, Roberto
    JOURNAL OF HIGH ENERGY PHYSICS, 2011, (03):
  • [35] Modeling and sensitivity analysis of neural networks
    Lamy, D
    MATHEMATICS AND COMPUTERS IN SIMULATION, 1996, 40 (5-6) : 535 - 548
  • [36] Sensitivity Analysis of Deep Neural Networks
    Shu, Hai
    Zhu, Hongtu
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 4943 - 4950
  • [37] NeuralSens: Sensitivity Analysis of Neural Networks
    Pizarroso, Jaime
    Portela, Jose
    Munoz, Antonio
    JOURNAL OF STATISTICAL SOFTWARE, 2022, 102 (07): : 1 - 36
  • [38] Matrix analysis for fast learning of neural networks with application to the classification of acoustic spectra
    Paul, Vlad S.
    Nelson, Philip A.
    JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, 2021, 149 (06): : 4119 - 4133
  • [39] Fast Learning in Spiking Neural Networks by Learning Rate Adaptation
    Fang Huijuan
    Luo Jiliang
    Wang Fei
    CHINESE JOURNAL OF CHEMICAL ENGINEERING, 2012, 20 (06) : 1219 - 1224
  • [40] Online Fast Deep Learning Tracker Based on Deep Sparse Neural Networks
    Wang, Xin
    Hou, Zhiqiang
    Yu, Wangsheng
    Jin, Zefenfen
    IMAGE AND GRAPHICS (ICIG 2017), PT I, 2017, 10666 : 186 - 198