A New Learning Algorithm with General Loss for Neural Networks with Random Weights

Citations: 0
Authors
Yao, Yunfei [1 ]
Li, Junfan [1 ]
Liao, Shizhong [1 ]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NNRWs; universal approximation property; strictly convex loss; strongly convex loss; convergence rate;
DOI
10.1109/ICTAI50040.2020.00047
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Neural networks with random weights (NNRWs), which randomly assign weights to newly added hidden nodes, provide a new and promising stochastic approach to the study of neural networks and have been proved to enjoy the universal approximation property. However, existing learning algorithms for NNRWs are designed only for the square loss function, which hinders the development of NNRWs. For strictly convex and strongly convex loss functions, it is unclear how to design learning algorithms and what convergence rates they achieve. In this paper, we answer these questions affirmatively. First, we propose a new supervisory mechanism for constructing NNRWs and prove the universal approximation property. Then we design a new learning algorithm, analyze its convergence rates, and derive the resulting time complexities. Specifically, the proposed algorithm enjoys sublinear convergence for smooth, strictly convex loss functions and linear convergence for smooth, strongly convex loss functions. Finally, experimental results on several real-world datasets verify the convergence rates of the algorithm under different loss functions.
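For readers skimming this record, a minimal sketch of the general NNRW recipe summarized in the abstract may help: hidden-layer weights are drawn at random and never trained, and only the output weights are fitted by minimizing a smooth convex loss. The sketch below is illustrative only; it assumes a tanh hidden layer and a logistic loss minimized by plain gradient descent, and is not the paper's supervisory mechanism or its proposed algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, W, b):
    """Hidden-layer outputs; W and b are random and stay fixed."""
    return np.tanh(X @ W + b)

def logistic_loss_grad(beta, H, y):
    """Smooth convex logistic loss and its gradient w.r.t. beta."""
    z = H @ beta
    loss = np.mean(np.logaddexp(0.0, -y * z))  # log(1 + exp(-y z)), numerically stable
    grad = H.T @ (-y / (1.0 + np.exp(y * z))) / len(y)
    return loss, grad

# Toy binary-classification data with labels in {-1, +1}.
n, d, m = 200, 5, 50
X = rng.normal(size=(n, d))
y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

# Random input weights and biases: assigned once, never updated.
W = rng.uniform(-1.0, 1.0, size=(d, m))
b = rng.uniform(-1.0, 1.0, size=m)
H = random_features(X, W, b)

# Only the output weights are learned. Gradient descent on a smooth
# convex loss is one simple choice; the paper analyzes convergence
# rates for strictly and strongly convex losses.
beta = np.zeros(m)
lr = 0.1
for _ in range(2000):
    loss, grad = logistic_loss_grad(beta, H, y)
    beta -= lr * grad

print(f"final training loss: {loss:.4f}")
```

Adding a strongly convex regularizer (e.g., an L2 term) to the loss would place the same loop in the strongly convex regime for which the abstract claims linear convergence.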
Pages: 244-248
Page count: 5
Related Papers
50 records in total
  • [21] Artificial Neural Networks with Random Weights for Incomplete Datasets
    Diego P. P. Mesquita
    João Paulo P. Gomes
    Leonardo R. Rodrigues
    [J]. Neural Processing Letters, 2019, 50 : 2345 - 2372
  • [22] Learning Neural Networks without Lazy Weights
    Lee, Dong-gi
    Cho, Junhee
    Kim, Myungjun
    Park, Sunghong
    Shin, Hyunjung
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON BIG DATA AND SMART COMPUTING (IEEE BIGCOMP 2022), 2022, : 82 - 87
  • [23] Hardness of Learning Neural Networks with Natural Weights
    Daniely, Amit
    Vardi, Gal
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [24] A new sequential learning algorithm for RBF neural networks
    Yang, Ge
    [J]. Science China Technological Sciences, 2004, (04) : 447 - 460
  • [25] A new sequential learning algorithm for RBF neural networks
    Ge Yang
    Jianhong Lü
    Zhiyuan Liu
    [J]. Science in China Series E: Technological Sciences, 2004, 47 : 447 - 460
  • [26] A New Learning Algorithm for Adaptive Spiking Neural Networks
    Wang, J.
    Belatreche, A.
    Maguire, L. P.
    McGinnity, T. M.
    [J]. NEURAL INFORMATION PROCESSING, PT I, 2011, 7062 : 461 - +
  • [27] A new sequential learning algorithm for RBF neural networks
    Yang, G
    Lü, JH
    Liu, ZY
    [J]. SCIENCE IN CHINA SERIES E-ENGINEERING & MATERIALS SCIENCE, 2004, 47 (04): : 447 - 460
  • [28] A NEW ALGORITHM FOR LEARNING REPRESENTATIONS IN BOOLEAN NEURAL NETWORKS
    BISWAS, NN
    KUMAR, R
    [J]. CURRENT SCIENCE, 1990, 59 (12): : 595 - 600
  • [29] A New Improved Learning Algorithm for Convolutional Neural Networks
    Yang, Jie
    Zhao, Junhong
    Lu, Lu
    Pan, Tingting
    Jubair, Sidra
    [J]. PROCESSES, 2020, 8 (03)
  • [30] Contrasting Advantages of Learning With Random Weights and Backpropagation in Non-Volatile Memory Neural Networks
    Bennett, Christopher H.
    Parmar, Vivek
    Calvet, Laurie E.
    Klein, Jacques-Olivier
    Suri, Manan
    Marinella, Matthew J.
    Querlioz, Damien
    [J]. IEEE ACCESS, 2019, 7 : 73938 - 73953