Neural Networks with Marginalized Corrupted Hidden Layer

Cited by: 1
Authors
Li, Yanjun [1 ]
Xin, Xin [1 ]
Guo, Ping [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Normal Univ, Image Proc & Pattern Recognit Lab, Beijing 100875, Peoples R China
Keywords
Neural network; Overfitting; Classification; Representations
DOI
10.1007/978-3-319-26555-1_57
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Overfitting is a key problem in training neural networks (NNs). When the number of training samples is limited, explicitly extending the training set with artificially generated samples is an effective remedy, but it incurs high computational costs. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) on an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of the training samples with noise drawn from an exponential-family distribution. As the number of corrupted copies approaches infinity, the explicitly generated samples in the objective function can be expressed as an expectation under the corrupting distribution. Our method, called marginalized corrupted hidden layer (MCHL), trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution; in this way MCHL is effectively trained on infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
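The marginalization idea in the abstract can be illustrated with a small sketch (assumptions not taken from the paper: additive Gaussian corruption of the hidden layer as one member of the exponential family, squared loss, fixed random input weights, and hypothetical variable names). For a corrupted hidden vector h̃ = h + ε with ε ~ N(0, σ²I), the expected squared loss has the closed form E[(wᵀh̃ − y)²] = (wᵀh − y)² + σ²‖w‖², which the Monte-Carlo average over explicitly corrupted copies approaches as their number grows:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SLFN pieces: random fixed input weights, sigmoid hidden layer.
X = rng.normal(size=(100, 5))            # 100 samples, 5 features
W_in = rng.normal(size=(5, 20))          # input-to-hidden weights
H = 1.0 / (1.0 + np.exp(-X @ W_in))      # hidden-layer outputs, shape (100, 20)
y = rng.normal(size=100)                 # targets
w = rng.normal(size=20)                  # output weights
sigma = 0.3                              # std of the additive Gaussian corruption

# Marginalized loss: expectation over the corruption, in closed form.
# E[(w . (h + eps) - y)^2] = (w . h - y)^2 + sigma^2 * ||w||^2
marginalized = np.mean((H @ w - y) ** 2) + sigma**2 * np.dot(w, w)

# Monte-Carlo counterpart: explicitly corrupt the hidden layer M times
# and average the resulting squared losses.
M = 3000
mc = np.mean([
    np.mean(((H + sigma * rng.normal(size=H.shape)) @ w - y) ** 2)
    for _ in range(M)
])

print(marginalized, mc)  # the two values agree closely as M grows
```

Minimizing `marginalized` with respect to `w` therefore trains on the implicitly extended (infinite) training set at the cost of a single regularization-like term, instead of generating corrupted copies explicitly.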
Pages: 506 - 514
Number of Pages: 9
Related Papers
50 records
  • [41] Approximation capability of two hidden layer feedforward neural networks with fixed weights
    Guliyev, Namig J.
    Ismailov, Vugar E.
    NEUROCOMPUTING, 2018, 316 : 262 - 269
  • [42] INTEGRATING GAUSSIAN MIXTURES INTO DEEP NEURAL NETWORKS: SOFTMAX LAYER WITH HIDDEN VARIABLES
    Tueske, Zoltan
    Tahir, Muhammad Ali
    Schlueter, Ralf
    Ney, Hermann
    2015 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (ICASSP), 2015, : 4285 - 4289
  • [43] Robust adaptive nonlinear control using single hidden layer neural networks
    Nardi, F
    Calise, AJ
    PROCEEDINGS OF THE 39TH IEEE CONFERENCE ON DECISION AND CONTROL, VOLS 1-5, 2000, : 3825 - 3830
  • [44] Improving Rule Extraction from Neural Networks by Modifying Hidden Layer Representations
    Huynh, Thuan Q.
    Reggia, James A.
    IJCNN: 2009 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1- 6, 2009, : 734 - 739
  • [45] Training Single Hidden Layer Feedforward Neural Networks by Singular Value Decomposition
    Hieu Trung Huynh
    Won, Yonggwan
    ICCIT: 2009 FOURTH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCES AND CONVERGENCE INFORMATION TECHNOLOGY, VOLS 1 AND 2, 2009, : 1300 - 1304
  • [46] Simultaneous approximations of multivariate functions and their derivatives by neural networks with one hidden layer
    Li, X
    NEUROCOMPUTING, 1996, 12 (04) : 327 - 343
  • [47] Bounds on the number of hidden neurons in three-layer binary neural networks
    Zhang, ZZ
    Ma, XM
    Yang, YX
    NEURAL NETWORKS, 2003, 16 (07) : 995 - 1002
  • [48] Analysis of one-hidden-layer Neural Networks via the Resolvent Method
    Piccolo, Vanessa
    Schroder, Dominik
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [49] A novel learning algorithm of single-hidden-layer feedforward neural networks
    Pu, Dong-Mei
    Gao, Da-Qi
    Ruan, Tong
    Yuan, Yu-Bo
    NEURAL COMPUTING & APPLICATIONS, 2017, 28 : S719 - S726
  • [50] Sensitivity analysis of single hidden-layer neural networks with threshold functions
    Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea
    IEEE TRANSACTIONS ON NEURAL NETWORKS, (4) : 1005 - 1007