Accelerated gradient learning algorithm for neural network weights update

Cited by: 0
Authors
Željko Hocenski
Mladen Antunović
Damir Filko
Affiliations
[1] University J.J. Strossmayer, Faculty of Electrical Engineering
Keywords
Neural network; Weights update; Gradient learning method; Parallel processing;
DOI: not available
Abstract
This work proposes a decomposition of the square approximation algorithm for neural network weight updates. The suggested improvement yields an alternative method that converges in fewer iterations and is inherently parallel. The decomposition enables parallel execution suitable for implementation on a computer grid. The improvement is reflected in an accelerated learning rate, which may be essential for time-critical decision processes. The proposed solution is tested and verified on a multilayer perceptron neural network case study over a wide range of parameters, such as the number of inputs/outputs, the length of the input/output data, and the number of neurons and layers. Experimental results show time savings of up to 40% in multi-threaded execution.
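The abstract gives no implementation details, so the Python sketch below only illustrates the general idea under stated assumptions: the "square approximation" update is taken to be a per-neuron least-squares fit of one layer's weights, and the decomposition exploits the fact that each neuron's weight vector can then be solved independently, in parallel threads. The function names (update_neuron_weights, parallel_layer_update), the ridge regularizer, and the threading layout are hypothetical choices for illustration, not the authors' method.

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def update_neuron_weights(inputs, target, ridge=1e-6):
    # Least-squares ("square approximation") fit of one neuron's weight
    # vector: solve (X^T X + ridge*I) w = X^T t via the normal equations.
    # inputs: (n_samples, n_inputs); target: (n_samples,) desired pre-activation.
    # The ridge term is an illustrative assumption to keep the system solvable.
    gram = inputs.T @ inputs + ridge * np.eye(inputs.shape[1])
    return np.linalg.solve(gram, inputs.T @ target)

def parallel_layer_update(inputs, targets, n_threads=4):
    # Each column of targets is one neuron's desired pre-activation, so the
    # per-neuron solves are independent and can run in separate threads
    # (or, in the same spirit, on separate grid nodes).
    cols = [targets[:, j] for j in range(targets.shape[1])]
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        weights = list(pool.map(lambda t: update_neuron_weights(inputs, t), cols))
    return np.column_stack(weights)  # (n_inputs, n_neurons) layer weight matrix

# Minimal usage example with random data: 100 samples, 3 inputs, 2 neurons.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
T = rng.standard_normal((100, 2))
W = parallel_layer_update(X, T)
print(W.shape)  # (3, 2)

NumPy releases the Python GIL inside its linear-algebra kernels, so the threads can genuinely overlap; this is one plausible way to obtain multi-threaded speedups of the kind the abstract reports, though the paper's actual decomposition and grid implementation may differ.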
Pages: 219 - 225 (6 pages)
Related papers (50 in total)
  • [41] An Incremental Algorithm for Parallel Training of the Size and the Weights in a Feedforward Neural Network
    Kateřina Hlaváčková-Schindler
    Manfred M. Fischer
    [J]. Neural Processing Letters, 2000, 11: 131 - 138
  • [42] An Improved Neural Network with Random Weights Using Backtracking Search Algorithm
    Bingqing Wang
    Lijin Wang
    Yilong Yin
    Yunlong Xu
    Wenting Zhao
    Yuchun Tang
    [J]. Neural Processing Letters, 2016, 44: 37 - 52
  • [43] Hybrid optimization algorithm for the definition of MLP neural network architectures and weights
    Lins, APS
    Ludermir, TB
    [J]. HIS 2005: 5th International Conference on Hybrid Intelligent Systems, Proceedings, 2005: 149 - 154
  • [44] An in-the-loop training algorithm for neural network implementation with digital weights
    Yang, JM
    Jullien, GA
    Ahmadi, M
    Miller, WC
    [J]. Intelligent Systems in Design and Manufacturing, 1998, 3517: 104 - 109
  • [45] Artificial neural network weights optimization design based on MEC algorithm
    He, XJ
    Zeng, JC
    Jie, J
    [J]. Proceedings of the 2004 International Conference on Machine Learning and Cybernetics, Vols 1-7, 2004: 3361 - 3364
  • [46] Research on three-step accelerated gradient algorithm in deep learning
    Lian, Yongqiang
    Tang, Yincai
    Zhou, Shirong
    [J]. Statistical Theory and Related Fields, 2022, 6 (01): 40 - 57
  • [47] A New Learning Algorithm with General Loss for Neural Networks with Random Weights
    Yao, Yunfei
    Li, Junfan
    Liao, Shizhong
    [J]. 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), 2020: 244 - 248
  • [48] Network Gradient Descent Algorithm for Decentralized Federated Learning
    Wu, Shuyuan
    Huang, Danyang
    Wang, Hansheng
    [J]. Journal of Business & Economic Statistics, 2023, 41 (03): 806 - 818
  • [49] Jointly Learning Network Connections and Link Weights in Spiking Neural Networks
    Qi, Yu
    Shen, Jiangrong
    Wang, Yueming
    Tang, Huajin
    Yu, Hang
    Wu, Zhaohui
    Pan, Gang
    [J]. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018: 1597 - 1603
  • [50] Granular weights in a neural network
    Dick, S
    Kandel, A
    [J]. Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Proceedings, Vols. 1-5, 2001: 1708 - 1713