Neural Networks with Marginalized Corrupted Hidden Layer

Cited: 1
Authors
Li, Yanjun [1 ]
Xin, Xin [1 ]
Guo, Ping [1 ,2 ]
Affiliations
[1] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Normal Univ, Image Proc & Pattern Recognit Lab, Beijing 100875, Peoples R China
Source
Keywords
Neural network; Overfitting; Classification; Representations
DOI
10.1007/978-3-319-26555-1_57
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Overfitting is an important problem in neural network (NN) training. When the number of training samples is limited, explicitly extending the training set with artificially generated samples is an effective remedy, but it incurs high computational cost. In this paper we propose a new learning scheme that trains single-hidden-layer feedforward neural networks (SLFNs) on an implicitly extended training set. The training set is extended by corrupting the hidden-layer outputs of the training samples with noise drawn from an exponential-family distribution. As the number of corrupted copies grows to infinity, the sum over the explicitly generated samples in the objective function converges to an expectation under the corrupting distribution. Our method, called marginalized corrupted hidden layer (MCHL), therefore trains SLFNs by minimizing the expected value of the loss function under the corrupting distribution; in this sense MCHL is trained on infinitely many samples. Experimental results on multiple data sets show that MCHL can be trained efficiently and generalizes better to test data.
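The abstract describes training on an implicitly infinite set of corrupted copies by marginalizing the corruption analytically. As a hedged illustration only (not the authors' implementation), the sketch below works out one case where this marginalization has a closed form: blankout (dropout-style) corruption of the hidden-layer outputs combined with a squared loss on the output weights of an SLFN. The function name mchl_output_weights, the corruption rate q, and the ridge term reg are illustrative assumptions, not names from the paper.

# Minimal sketch of the marginalized-corrupted-hidden-layer idea, under
# assumptions not stated in the abstract: each hidden output is independently
# zeroed with probability q (blankout noise), and the output layer is fit with
# a squared loss, for which the expected loss can be marginalized in closed form.
import numpy as np

def mchl_output_weights(H, Y, q=0.3, reg=1e-6):
    """Output weights W minimizing E||Y - H_tilde @ W||^2, where H_tilde is
    H with entries zeroed independently with probability q.

    H : (n, d) hidden-layer outputs for the n training samples
    Y : (n, c) one-hot (or real-valued) targets
    """
    d = H.shape[1]
    # E[h_tilde_ij] = (1-q) h_ij   and   Var[h_tilde_ij] = q(1-q) h_ij^2
    mean_term = (1.0 - q) ** 2 * (H.T @ H)                      # from ||Y - E[H_tilde] W||^2
    var_term = q * (1.0 - q) * np.diag((H ** 2).sum(axis=0))    # from the noise variance
    A = mean_term + var_term + reg * np.eye(d)
    B = (1.0 - q) * (H.T @ Y)
    return np.linalg.solve(A, B)                                # (d, c) output weights

# Toy usage: a random, fixed hidden layer standing in for a trained one.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
W_in, b = rng.normal(size=(20, 50)), rng.normal(size=50)
H = np.tanh(X @ W_in + b)                                       # hidden-layer outputs
Y = np.eye(3)[rng.integers(0, 3, size=200)]                     # one-hot labels
W_out = mchl_output_weights(H, Y, q=0.3)
pred = np.argmax((1.0 - 0.3) * H @ W_out, axis=1)               # predict with the expected hidden output

For other exponential-family corruptions or loss functions the expectation generally has no closed form, and the expected loss would instead be minimized by gradient descent; the closed-form case above is only meant to make the "training with infinite samples" interpretation concrete.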
Pages: 506 - 514
Page count: 9