On derivation of stagewise second-order backpropagation by invariant imbedding for multi-stage neural-network learning

Cited by: 0
Authors
Mizutani, Eiji [1 ]
Dreyfus, Stuart [2 ]
Affiliations
[1] Natl Tsing Hua Univ, Dept Comp Sci, Hsinchu 300, Taiwan
[2] Univ Calif Berkeley, Dept Ind Engn & Operat Res, Berkeley, CA 94720 USA
Keywords: (none listed)
DOI: none available
Chinese Library Classification (CLC): TP18 [theory of artificial intelligence]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
We present a simple, intuitive argument based on "invariant imbedding" in the spirit of dynamic programming to derive a stagewise second-order backpropagation (BP) algorithm. The method evaluates the Hessian matrix of a general objective function efficiently by exploiting the multi-stage structure embedded in a given neural-network model such as a multilayer perceptron (MLP). Consequently, for instance, our stagewise BP can compute the full Hessian matrix "faster" than the standard method that evaluates only the Gauss-Newton Hessian matrix by rank updates in nonlinear least-squares learning. Through our derivation, we also show how the procedure serves to develop advanced learning algorithms; in particular, we explain how the introduction of "stage costs" leads to alternative systematic implementations of multi-task learning and weight decay.
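The "stage cost" idea mentioned in the abstract can be made concrete with a small sketch. The code below is not the authors' stagewise recursion; it is a minimal illustration, assuming a two-layer tanh MLP with layer-wise quadratic stage costs (i.e., per-layer weight decay), of the kind of objective whose full Hessian the stagewise second-order BP evaluates layer by layer. Here jax.hessian is used only as a brute-force reference for that matrix; the function names, layer shapes, and penalty weights lam are illustrative assumptions, not taken from the paper.

    # Hypothetical sketch (assumptions noted above), not the paper's algorithm.
    import jax
    import jax.numpy as jnp

    def objective(params, x, t, lam=(1e-3, 1e-3)):
        W1, W2 = params                          # stage-1 and stage-2 weights
        h = jnp.tanh(W1 @ x)                     # stage 1: hidden activations
        y = W2 @ h                               # stage 2: linear output
        data_cost = 0.5 * jnp.sum((y - t) ** 2)  # terminal (output) cost
        stage_costs = 0.5 * (lam[0] * jnp.sum(W1 ** 2)    # stage cost, layer 1
                             + lam[1] * jnp.sum(W2 ** 2)) # stage cost, layer 2
        return data_cost + stage_costs

    key = jax.random.PRNGKey(0)
    k1, k2 = jax.random.split(key)
    W1 = 0.1 * jax.random.normal(k1, (3, 2))
    W2 = 0.1 * jax.random.normal(k2, (1, 3))
    x, t = jnp.array([0.5, -1.0]), jnp.array([1.0])

    # Full Hessian blocks w.r.t. all weights, computed by brute force here;
    # the paper's stagewise method obtains the same matrix by a layer-by-layer
    # (stagewise) recursion rather than by generic automatic differentiation.
    H = jax.hessian(objective)((W1, W2), x, t)

Setting both entries of lam to zero recovers the plain least-squares objective; choosing different stage costs per layer (or per task head) is the kind of systematic variation the abstract attributes to the stagewise formulation.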
Pages: 4762 / +
Number of pages: 2