On Theoretical Analysis of Single Hidden Layer Feedforward Neural Networks with Relu Activations

Cited by: 0
Authors
Shen, Guorui [1 ]
Yuan, Ye [1 ]
Affiliation
[1] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan, Peoples R China
Keywords
extreme learning machine; single hidden layer feedforward neural networks; rectifier linear unit; EXTREME LEARNING-MACHINE; GAME; GO;
DOI
10.1109/yac.2019.8787645
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Over the past decades, the extreme learning machine has acquired considerable popularity due to its fast training speed and easy implementation. Although the extreme learning machine has been proved valid when using an infinitely differentiable activation function such as the sigmoid, existing extreme learning machine theory pays little attention to non-differentiable activation functions. However, non-differentiable activations, the rectified linear unit (ReLU) in particular, have been demonstrated to enable better training of deep neural networks than the previously widely used sigmoid activation, and ReLU is today the most popular choice for deep neural networks. Therefore, in this note we consider an extreme learning machine that adopts a non-smooth activation function, and show that a ReLU-activated single hidden layer feedforward neural network (SLFN) can fit the given training data points with zero error provided that sufficiently many neurons are available at the hidden layer. The proof relies on a slightly different assumption from the original one, but the assumption remains easy to satisfy. In addition, we find that the squared fitting error is monotonically non-increasing with respect to the number of hidden nodes, which in turn means that a wider SLFN has greater expressive capacity.
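The following is a minimal sketch, assuming NumPy, of the setting described in the abstract: a ReLU-activated SLFN trained in the extreme learning machine fashion, where the hidden-layer weights are drawn at random and only the output weights are solved by least squares. The function names (train_relu_elm, predict) and the toy data are illustrative assumptions, not the authors' code; the script only checks numerically that the fitting error vanishes when the number of hidden neurons matches the number of training samples, as the paper claims.

    import numpy as np

    def relu(z):
        # element-wise rectified linear unit
        return np.maximum(z, 0.0)

    def train_relu_elm(X, T, n_hidden, seed=0):
        # ELM-style training: random hidden-layer parameters,
        # output weights obtained by least squares (pseudo-inverse).
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
        b = rng.standard_normal(n_hidden)                # random hidden biases
        H = relu(X @ W + b)                              # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ T                     # least-squares output weights
        return W, b, beta

    def predict(X, W, b, beta):
        return relu(X @ W + b) @ beta

    # Toy check of the zero-error claim: with as many hidden neurons as
    # training samples, the squared fitting error is numerically zero.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 3))
    T = rng.standard_normal((50, 1))
    W, b, beta = train_relu_elm(X, T, n_hidden=50)
    err = np.sum((predict(X, W, b, beta) - T) ** 2)
    print(f"squared fitting error: {err:.3e}")

Increasing n_hidden beyond the number of samples should not increase this squared error, which mirrors the monotonicity result stated in the abstract.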
Pages: 706-709
Page count: 4