On Theoretical Analysis of Single Hidden Layer Feedforward Neural Networks with Relu Activations

Cited by: 0
Authors
Shen, Guorui [1 ]
Yuan, Ye [1 ]
Affiliation
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan, Peoples R China
Keywords
extreme learning machine; single hidden layer feedforward neural networks; rectifier linear unit; EXTREME LEARNING-MACHINE; GAME; GO;
DOI
10.1109/yac.2019.8787645
Chinese Library Classification
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
During the past decades, the extreme learning machine has gained considerable popularity due to its fast training speed and ease of implementation. Although the extreme learning machine has been proved valid when an infinitely differentiable function such as the sigmoid is used as the activation, existing extreme learning machine theory pays little attention to non-differentiable activation functions. However, non-differentiable activations, the rectified linear unit (ReLU) in particular, have been demonstrated to enable better training of deep neural networks than the previously widely used sigmoid activation, and today ReLU is the most popular choice for deep neural networks. Therefore, in this note we consider extreme learning machines that adopt a non-smooth activation function, and show that a ReLU-activated single hidden layer feedforward neural network (SLFN) is capable of fitting the given training data points with zero error, provided that sufficiently many hidden neurons are available at the hidden layer. The proof relies on an assumption slightly different from the original one but still easy to satisfy. In addition, we show that the squared fitting error is monotonically non-increasing with respect to the number of hidden nodes, which in turn means that a wider SLFN has greater expressive capacity.
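As a rough numerical illustration of the two claims in the abstract (a minimal sketch assuming numpy, not the authors' code), the following trains a ReLU-activated SLFN in the ELM style: hidden weights and biases are drawn at random and only the output weights are solved by least squares. Because the network is widened by appending hidden nodes to the same random pool, the column space of the hidden-layer output matrix only grows, so the squared fitting error is exactly non-increasing, and it typically drops to about zero once the number of hidden nodes reaches the number of training samples.

import numpy as np

rng = np.random.default_rng(0)
N, d, max_hidden = 50, 3, 100          # samples, input dim, widest network

X = rng.normal(size=(N, d))            # synthetic training inputs
y = rng.normal(size=N)                 # synthetic training targets

# One pool of random hidden neurons; a network with n hidden nodes
# uses the first n of them, so widening never shrinks the span of H.
W = rng.normal(size=(d, max_hidden))   # random input weights (fixed, not trained)
b = rng.normal(size=max_hidden)        # random hidden biases (fixed, not trained)
H_all = np.maximum(X @ W + b, 0.0)     # ReLU hidden-layer outputs, shape (N, max_hidden)

for n in (5, 10, 25, 50, 100):
    H = H_all[:, :n]
    # ELM step: output weights by least squares on the hidden outputs.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    sse = float(np.sum((H @ beta - y) ** 2))
    print(f"hidden nodes = {n:3d}   squared error = {sse:.3e}")

# The printed squared error never increases as n grows, and for generic
# data H typically reaches full row rank once n >= N, at which point the
# error is ~0 up to floating-point round-off, matching the zero-error
# fitting claim.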
Pages: 706 - 709
Number of pages: 4
Related Papers
50 records in total (items [31]-[40] shown)
  • [31] A new robust training algorithm for a class of single-hidden layer feedforward neural networks
    Man, Zhihong
    Lee, Kevin
    Wang, Dianhui
    Cao, Zhenwei
    Miao, Chunyan
    [J]. NEUROCOMPUTING, 2011, 74 (16) : 2491 - 2501
  • [32] CONVERGENCE-RATES FOR SINGLE HIDDEN LAYER FEEDFORWARD NETWORKS
    MCCAFFREY, DF
    GALLANT, AR
    [J]. NEURAL NETWORKS, 1994, 7 (01) : 147 - 158
  • [33] A new deep neural network based on a stack of single-hidden-layer feedforward neural networks with randomly fixed hidden neurons
    Hu, Junying
    Zhang, Jiangshe
    Zhang, Chunxia
    Wang, Juan
    [J]. NEUROCOMPUTING, 2016, 171 : 63 - 72
  • [34] HIDDEN MINIMA IN TWO-LAYER RELU NETWORKS
    Arjevani, Yossi
    [J]. arXiv, 2023,
  • [36] Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations
    Neyshabur, Behnam
    Wu, Yuhuai
    Salakhutdinov, Ruslan
    Srebro, Nathan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [37] On the solution of the parity problem by a single hidden layer feedforward neural network
    Setiono, R
    [J]. NEUROCOMPUTING, 1997, 16 (03) : 225 - 235
  • [38] Online training for single hidden-layer feedforward neural networks using RLS-ELM
    Huynh, Hieu Trung
    Won, Yonggwan
    [J]. IEEE INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE IN ROBOTICS AND AUTOMATION, 2009, : 469 - 473
  • [39] Nonlinear system identification using BPWA based single hidden-layer feedforward neural networks
    School of Information Science and Technology, Southwest Jiaotong University, Chengdu 610031, China
    [J]. Tiedao Xuebao (Journal of the China Railway Society), 2007, (5) : 48 - 53
  • [40] Hematocrit Estimation from Compact Single Hidden Layer Feedforward Neural Networks Trained by Evolutionary Algorithm
    Huynh, Hieu Trung
    Won, Yonggwan
    [J]. 2008 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION, VOLS 1-8, 2008, : 2962 - 2966