A Computationally Efficient Weight Pruning Algorithm for Artificial Neural Network Classifiers

Citations: 0
|
Authors
Sakshi [1 ]
Kumar, Ravi [1 ]
Affiliation
[1] Thapar Univ, Elect & Commun Engn Dept, Patiala 147004, Punjab, India
Keywords
Weight pruning; Artificial neural network; Backpropagation; Complexity penalty; Fisher information; Pattern classification; MULTILAYER PERCEPTRONS;
DOI
10.1007/s13369-017-2887-2
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07; 0710; 09;
Abstract
A novel technique is proposed to prune the weights of artificial neural networks (ANNs) during training with the backpropagation algorithm. Iterative update of weights through the gradient-descent mechanism does not guarantee convergence within a specified number of epochs. Pruning of non-relevant weights not only reduces computational complexity but also improves classification performance. The algorithm first defines the relevance of the initialized weights in a statistical sense by introducing a coefficient of dominance for each weight converging on a hidden node, and subsequently employs the concept of a complexity penalty. Based on the complexity penalty of each weight, a decision is made to either prune or retain it. It is shown analytically that a weight with a higher complexity penalty carries a high degree of Fisher information, which in turn implies its ability to capture variations in the input set for better classification. Simulation experiments performed on five benchmark data sets reveal that ANNs pruned with the proposed technique exhibit faster convergence, lower execution time, and a higher success rate in the test phase, and yield a substantial reduction in computational resources. For complex architectures, early convergence was found to be directly correlated with the percentage of weights pruned. The efficacy of the technique has been validated on several benchmark data sets with a large diversity of attributes.
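The abstract outlines the pruning pipeline only at a high level; the exact formulas for the coefficient of dominance and the complexity penalty are given in the full paper, not here. The sketch below is therefore a hypothetical illustration: it assumes the dominance of a weight is its share of the total absolute incoming weight of its hidden node, and combines that with the classic weight-elimination penalty w^2/(1 + w^2) to decide which weights to retain.

```python
import numpy as np

def prune_weights(W, penalty_threshold=0.1):
    """Illustrative dominance-based pruning for one hidden layer.

    W: (n_inputs, n_hidden) weight matrix; column j holds the weights
    converging on hidden node j.

    Assumptions (not from the abstract): the coefficient of dominance
    of a weight is its fraction of the node's total absolute incoming
    weight, and the complexity penalty is that dominance scaled by the
    weight-elimination term w^2 / (1 + w^2). Weights whose penalty
    falls below a fraction of the maximum penalty are pruned (zeroed).
    """
    abs_W = np.abs(W)
    # Coefficient of dominance: per-weight share within each hidden node.
    dominance = abs_W / abs_W.sum(axis=0, keepdims=True)
    # Complexity penalty per weight (assumed form).
    complexity_penalty = dominance * (W**2 / (1.0 + W**2))
    # Retain only weights with a sufficiently large penalty,
    # i.e. those carrying more Fisher information per the paper's claim.
    mask = complexity_penalty >= penalty_threshold * complexity_penalty.max()
    return W * mask, mask
```

In an actual training loop this decision would be revisited as weights evolve under backpropagation; the threshold here is a free parameter standing in for the paper's statistically derived criterion.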
Pages: 6787-6799
Page count: 13