Neural Networks with Marginalized Corrupted Hidden Layer

Cited: 1
Authors: Li, Yanjun [1]; Xin, Xin [1]; Guo, Ping [1,2]
Affiliations:
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Normal Univ, Image Proc & Pattern Recognit Lab, Beijing 100875, Peoples R China
Keywords: Neural network; Overfitting; Classification; Representations
DOI: 10.1007/978-3-319-26555-1_57
CLC Classification: TP18 [Artificial intelligence theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Overfitting is an important problem in training neural networks (NNs). When the number of samples in the training set is limited, explicitly extending the training set with artificially generated samples is an effective remedy, but it incurs high computational costs. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) with an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of training samples with noise drawn from an exponential-family distribution. As the number of corruptions approaches infinity, the explicitly generated samples in the objective function can be expressed as an expectation. Our method, called marginalized corrupted hidden layer (MCHL), trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution. In this way MCHL is effectively trained with infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
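The marginalization trick described in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's exact MCHL algorithm (which uses exponential-family corruption on SLFN hidden activations); it assumes a simplified setting of additive Gaussian corruption on a hidden-layer output vector with a squared loss, where the expected loss has a closed form: E[(y - w·(h+ε))²] = (y - w·h)² + σ²‖w‖². It checks numerically that averaging the loss over many explicitly corrupted copies converges to this marginalized expression, which is the sense in which the model is "trained with infinite samples".

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quantities (illustrative only): hidden-layer outputs h,
# output weights w, scalar target y.
h = rng.standard_normal(5)
w = rng.standard_normal(5)
y = 1.0
sigma = 0.3  # std of additive Gaussian corruption on the hidden layer

# Explicit corruption: average the squared loss over many corrupted copies.
n = 200_000
noise = sigma * rng.standard_normal((n, 5))
explicit = np.mean((y - (h + noise) @ w) ** 2)

# Marginalized form: E[(y - w.(h+eps))^2] = (y - w.h)^2 + sigma^2 * ||w||^2,
# computed in closed form with no sampling at all.
marginal = (y - h @ w) ** 2 + sigma**2 * np.sum(w**2)

print(explicit, marginal)  # the two values agree closely
```

Training against the closed-form `marginal` term replaces the costly explicit data augmentation loop with a single regularization-like penalty, which is the computational advantage the abstract claims.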
Pages: 506-514 (9 pages)
Related Papers (50 total)
  • [31] Lolli, F.; Gamberini, R.; Regattieri, A.; Balugani, E.; Gatos, T.; Gucci, S. Single-hidden layer neural networks for forecasting intermittent demand. International Journal of Production Economics, 2017, 183: 116-128.
  • [32] Chen, Sitan; Klivans, Adam R.; Meka, Raghu. Efficiently Learning One Hidden Layer Neural Networks From Queries. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [33] Li, Leong Kwan; Shao, Sally; Yiu, Ka-Fai Cedric. A new optimization algorithm for single hidden layer feedforward neural networks. Applied Soft Computing, 2013, 13(5): 2857-2862.
  • [34] Nakama, Takehiko. Comparisons of Single- and Multiple-Hidden-Layer Neural Networks. Advances in Neural Networks - ISNN 2011, Pt I, 2011, 6675: 270-279.
  • [35] Li, Yingming; Yang, Ming; Xu, Zenglin; Zhang, Zhongfei. Learning with Marginalized Corrupted Features and Labels Together. Thirtieth AAAI Conference on Artificial Intelligence, 2016: 1251-1257.
  • [36] Tan, H. S. Fourier neural networks and generalized single hidden layer networks in aircraft engine fault diagnostics. Journal of Engineering for Gas Turbines and Power - Transactions of the ASME, 2006, 128(4): 773-782.
  • [37] Huynh, Thuan Q.; Reggia, James A. Guiding Hidden Layer Representations for Improved Rule Extraction from Neural Networks. IEEE Transactions on Neural Networks, 2011, 22(2): 264-275.
  • [38] Huynh, Hieu Trung; Won, Yonggwan. Evolutionary Algorithm for Training Compact Single Hidden Layer Feedforward Neural Networks. 2008 IEEE International Joint Conference on Neural Networks, Vols 1-8, 2008: 3028-3033.
  • [39] Fornasier, Massimo; Klock, Timo; Rauchensteiner, Michael. Robust and Resource-Efficient Identification of Two Hidden Layer Neural Networks. Constructive Approximation, 2022, 55(1): 475-536.
  • [40] Kim, LS. Construction and initialization of a hidden layer of multilayer neural networks using linear programming. Critical Technology: Proceedings of the Third World Congress on Expert Systems, Vols I and II, 1996: 986-992.