Radial Effect in Stochastic Diagonal Approximate Greatest Descent

Cited: 0
Authors
Tan, Hong Hui [1 ]
Lim, King Hann [1 ]
Harno, Hendra G. [1 ]
Affiliation
[1] Curtin University Malaysia, Faculty of Engineering and Science, Department of Electrical and Computer Engineering, Miri, Malaysia
Keywords
radial effect; Stochastic Diagonal Approximate Greatest Descent
DOI
None available
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Stochastic Diagonal Approximate Greatest Descent (SDAGD) is proposed to manage optimization in two stages: (a) apply a radial boundary to estimate the step length when the weights are far from the solution, and (b) apply the Newton method when the weights are within the solution level set. This is inspired by multi-stage decision control systems, in which different strategies are used under different conditions. In the context of numerical optimization, larger steps should be taken at the beginning of optimization and gradually reduced as the iterates approach the minimum point. Nevertheless, how to determine the radial boundary when the optimized parameters are far from the solution has yet to be investigated for high-dimensional data. The radial step length in SDAGD controls the relative step length used to construct each iteration. SDAGD is implemented in a two-layer multilayer perceptron to evaluate the effect of the radial boundary R on artificial neural networks. It is concluded that the greater the value of R, the higher the learning rate of the SDAGD algorithm, provided R is constrained between 100 and 10,000.
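To make the two-stage construction concrete, the following sketch shows one SDAGD-style weight update in NumPy. It assumes the Levenberg-Marquardt-like form w <- w - [diag(H) + (||g||/R) I]^{-1} g that the two stages above suggest; the function name sdagd_step, the damping constant eps, and the use of |diag(H)| are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def sdagd_step(w, grad, diag_hessian, R=1000.0, eps=1e-8):
        # Relative step length derived from the radial boundary R (assumed
        # form): far from the solution ||grad|| is large, mu dominates the
        # denominator, and the step is capped at roughly R * grad / ||grad||,
        # i.e. a step of length about R.
        mu = np.linalg.norm(grad) / R
        # The diagonal Hessian approximation stands in for the full Newton
        # matrix; abs() and eps keep the denominator positive.
        denom = np.abs(diag_hessian) + mu + eps
        # Near the solution ||grad|| -> 0, so mu -> 0 and the update
        # reduces to a diagonal Newton step.
        return w - grad / denom

Under this sketch the reported trend is visible directly: for a fixed gradient, raising R from 100 toward 10,000 shrinks mu, which enlarges the step and hence the effective learning rate.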
Pages: 226-229
Page count: 4
Related Papers
(50 records in total)
  • [1] Stochastic Diagonal Approximate Greatest Descent in Neural Networks
    Tan, Hong Hui
    Lim, King Hann
    Harno, Hendra Gunawan
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017: 1895-1898
  • [2] Stochastic Diagonal Approximate Greatest Descent in Convolutional Neural Networks
    Tan, Hong Hui
    Lim, King Hann
    Harno, Hendra G.
    2017 IEEE INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS (ICSIPA), 2017: 451-454
  • [3] Vanishing Gradient Analysis in Stochastic Diagonal Approximate Greatest Descent Optimization
    Tan, Hong Hui
    Lim, King Hann
    JOURNAL OF INFORMATION SCIENCE AND ENGINEERING, 2020, 36(05): 1007-1019
  • [4] APPROXIMATE GREATEST DESCENT IN NEURAL NETWORK OPTIMIZATION
    Lim, King Hann
    Tan, Hong Hui
    Harno, Hendra G.
    NUMERICAL ALGEBRA CONTROL AND OPTIMIZATION, 2018, 8(03): 327-336
  • [5] Approximate Greatest Descent Methods for Optimization with Equality Constraints
    Goh, B. S.
    JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 2011, 148(03): 505-527
  • [6] Minibatch Approximate Greatest Descent on CIFAR-10 Dataset
    Tan, Hong Hui
    Lim, King Hann
    2018 IEEE-EMBS CONFERENCE ON BIOMEDICAL ENGINEERING AND SCIENCES (IECBES), 2018: 320-323
  • [7] Stochastic Gradient Descent as Approximate Bayesian Inference
    Mandt, Stephan
    Hoffman, Matthew D.
    Blei, David M.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2017, 18
  • [8] Stochastic Approximate Gradient Descent via the Langevin Algorithm
    Qiu, Yixuan
    Wang, Xiao
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34: 5428-5435
  • [9] ON A TWO-PHASE APPROXIMATE GREATEST DESCENT METHOD FOR NONLINEAR OPTIMIZATION WITH EQUALITY CONSTRAINTS
    Lee, M. S.
    Goh, B. S.
    Harno, H. G.
    Lim, K. H.
    NUMERICAL ALGEBRA CONTROL AND OPTIMIZATION, 2018, 8(03): 315-326