Feedforward Neural Networks with a Hidden Layer Regularization Method

Cited: 25
Authors
Alemu, Habtamu Zegeye [1 ]
Wu, Wei [1 ]
Zhao, Junhong [1 ]
Affiliations
[1] Dalian University of Technology, School of Mathematical Sciences, Dalian 116024, People's Republic of China
Source
SYMMETRY-BASEL, 2018, Vol. 10, Iss. 10
Keywords
sparsity; feedforward neural networks; hidden layer regularization; group Lasso; Lasso; smoothing L-1/2 regularization; regression; algorithm; selection
DOI
10.3390/sym10100525
CLC Numbers
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Codes
07; 0710; 09;
Abstract
In this paper, we propose a group Lasso regularization term as a hidden layer regularization method for feedforward neural networks. Adding a group Lasso term to the standard error function is a fruitful way to eliminate redundant or unnecessary hidden-layer neurons from the network structure. For comparison, the popular Lasso regularization method is also introduced into the standard error function of the network. Our hidden layer regularization method forces whole groups of outgoing weights to shrink together during training, so that the corresponding hidden neurons can be removed once training is complete. This simplifies the network structure and reduces the computational cost. Numerical simulations use K-fold cross-validation with k = 5 to avoid overtraining and to select the best learning parameters. The results show that, on each benchmark dataset, the proposed hidden layer regularization method consistently prunes more redundant hidden-layer neurons without loss of accuracy. In contrast, the existing Lasso regularization method prunes only redundant individual weights of the network and cannot prune entire hidden-layer neurons.
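As a rough illustration (not the authors' implementation), the approach adds a group Lasso penalty to the standard error function, E(w) = E0(w) + lambda * sum_j ||w_j||_2, where w_j is the group of outgoing weights of hidden neuron j. The minimal NumPy sketch below shows the penalty and the post-training pruning step; the two-layer weight shapes, the squared-error loss, and the pruning tolerance tol are assumptions, and the paper's keywords suggest a smoothed variant of the norm is used in practice to keep gradients defined at zero.

import numpy as np

def group_lasso_penalty(W_out, lam):
    # lambda * sum over hidden neurons of the L2 norm of that
    # neuron's group of outgoing weights (one row of W_out each)
    return lam * np.sum(np.linalg.norm(W_out, axis=1))

def penalized_error(y_pred, y_true, W_out, lam):
    # standard squared error plus the hidden layer group Lasso term
    mse = 0.5 * np.mean((y_pred - y_true) ** 2)
    return mse + group_lasso_penalty(W_out, lam)

def prune_hidden_neurons(W_in, W_out, tol=1e-3):
    # drop hidden neurons whose whole outgoing-weight group has been
    # driven (near) to zero by the penalty during training
    keep = np.linalg.norm(W_out, axis=1) > tol
    return W_in[:, keep], W_out[keep, :]

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 10))   # input -> hidden weights
W_out = rng.normal(size=(10, 3))  # hidden -> output weights
W_out[[2, 5], :] = 0.0            # pretend training zeroed two groups
W_in, W_out = prune_hidden_neurons(W_in, W_out)
print(W_in.shape, W_out.shape)    # (4, 8) (8, 3): two neurons removed

Because the penalty acts per group rather than per weight, an entire neuron's outgoing weights reach zero together, which is what allows whole-neuron pruning; a plain Lasso penalty on individual weights (the comparison method) zeroes weights independently and so tends not to free up any complete neuron.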
Pages: 18