Neural Networks with Marginalized Corrupted Hidden Layer

Times Cited: 1
Authors
Li, Yanjun [1 ]
Xin, Xin [1 ]
Guo, Ping [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Normal Univ, Image Proc & Pattern Recognit Lab, Beijing 100875, Peoples R China
Keywords
Neural network; Overfitting; Classification; Representations
DOI
10.1007/978-3-319-26555-1_57
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Overfitting is an important problem in neural network (NN) training. When the number of samples in the training set is limited, explicitly extending the training set with artificially generated samples is an effective remedy, but it incurs high computational costs. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) on an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of training samples with noise drawn from an exponential-family distribution. As the number of corruptions approaches infinity, the explicitly generated samples in the objective function can be expressed as an expectation. Our method, called marginalized corrupted hidden layer (MCHL), trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution. In this way MCHL is effectively trained on infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
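As a worked sketch of the marginalization step (our illustration under simplifying assumptions, not necessarily the formulation in the paper): take a quadratic loss on a single output with weight vector $w$, and corrupt the hidden activation vector $h$ additively, $\tilde{h} = h + \varepsilon$ with $\mathbb{E}[\varepsilon] = 0$ and $\mathrm{Cov}(\varepsilon) = \Sigma$ (a zero-mean special case of exponential-family corruption). The expected loss under the corrupting distribution then has a closed form:

\[
\mathbb{E}_{\varepsilon}\!\left[ \big( y - w^{\top}\tilde{h} \big)^{2} \right]
= \big( y - w^{\top} h \big)^{2} + w^{\top} \Sigma\, w .
\]

The first term is the ordinary loss on the uncorrupted sample; the second acts as a regularizer on $w$. Minimizing this expectation is thus equivalent to training on infinitely many corrupted copies of each sample without ever generating them.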
Pages: 506 - 514
Number of pages: 9
Related Papers
50 records in total
  • [1] Neural networks for word recognition: Is a hidden layer necessary?
    Dandurand, Frederic
    Hannagan, Thomas
    Grainger, Jonathan
    COGNITION IN FLUX, 2010, : 688 - 693
  • [2] Regularization of hidden layer unit response for neural networks
    Taga, K
    Kameyama, K
    Toraichi, K
    2003 IEEE PACIFIC RIM CONFERENCE ON COMMUNICATIONS, COMPUTERS, AND SIGNAL PROCESSING, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 2003, : 348 - 351
  • [3] HOW TO DETERMINE THE STRUCTURE OF THE HIDDEN LAYER IN NEURAL NETWORKS
    Wei, Qiang
    Zhang, Shijun
    Zhang, Yongchuan
    Water Resources and Power (水电能源科学), 1997, (01) : 18 - 22
  • [4] Feedforward Neural Networks with a Hidden Layer Regularization Method
    Alemu, Habtamu Zegeye
    Wu, Wei
    Zhao, Junhong
    SYMMETRY-BASEL, 2018, 10 (10):
  • [5] Modular Expansion of the Hidden Layer in Single Layer Feedforward Neural Networks
    Tissera, Migel D.
    McDonnell, Mark D.
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 2939 - 2945
  • [6] Collapsing multiple hidden layers in feedforward neural networks to a single hidden layer
    Blue, JL
    Hall, LO
    APPLICATIONS AND SCIENCE OF ARTIFICIAL NEURAL NETWORKS II, 1996, 2760 : 44 - 52
  • [7] Simplicity Bias in 1-Hidden Layer Neural Networks
    Morwani, Depen
    Batra, Jatin
    Jain, Prateek
    Netrapalli, Praneeth
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [8] A sequential learning approach for single hidden layer neural networks
    Zhang, J
    Morris, AJ
    NEURAL NETWORKS, 1998, 11 (01) : 65 - 80
  • [9] DEGREE OF APPROXIMATION BY NEURAL AND TRANSLATION NETWORKS WITH A SINGLE HIDDEN LAYER
    MHASKAR, HN
    MICCHELLI, CA
    ADVANCES IN APPLIED MATHEMATICS, 1995, 16 (02) : 151 - 183
  • [10] Training neural networks by marginalizing out hidden layer noise
    Li, Yanjun
    Guo, Ping
    NEURAL COMPUTING AND APPLICATIONS, 2018, 29 : 401 - 412