An unsupervised parameter learning model for RVFL neural network

Cited by: 95
Authors
Zhang, Yongshan [1 ]
Wu, Jia [2 ]
Cai, Zhihua [1 ]
Du, Bo [3 ]
Yu, Philip S. [4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Hubei, Peoples R China
[2] Macquarie Univ, Dept Comp, Fac Sci & Engn, Sydney, NSW 2109, Australia
[3] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Hubei, Peoples R China
[4] Univ Illinois, Dept Comp Sci, Chicago, IL 60607 USA
Keywords
Random vector functional link network; Randomized feedforward neural networks; Autoencoder; l1-norm regularization; Pre-trained parameters; Classification applications; FUNCTIONAL-LINK NETWORK; SOFTWARE TOOL; ALGORITHMS; MACHINE; DIMENSIONALITY; REGRESSION; REDUCTION; KEEL
DOI
10.1016/j.neunet.2019.01.007
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
With direct input-output connections, a random vector functional link (RVFL) network is a simple and effective learning algorithm for single-hidden-layer feedforward neural networks (SLFNs). RVFL is a universal approximator for continuous functions on compact sets and has a fast learning property. Owing to its simplicity and effectiveness, RVFL has attracted significant interest in numerous real-world applications. In practice, however, the performance of RVFL is often limited by its randomly assigned network parameters. In this paper, we propose a novel unsupervised network parameter learning method for RVFL, named the sparse pre-trained random vector functional link (SP-RVFL) network. SP-RVFL uses a sparse autoencoder with l1-norm regularization to adaptively learn network parameters suited to specific learning tasks. The learned parameters thus encode valuable information about the input data, which alleviates the issue of randomly generated parameters and improves performance. Experiments and comparisons on 16 diverse benchmarks from different domains confirm the effectiveness of the proposed SP-RVFL. The results also demonstrate that RVFL outperforms the extreme learning machine (ELM). (C) 2019 Elsevier Ltd. All rights reserved.
Pages: 85-97
Page count: 13
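
For orientation, the pipeline the abstract describes can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the plain RVFL draws hidden parameters at random and solves only the output weights in closed form (ridge regression over the concatenation of raw inputs and hidden features, reflecting the direct input-output links), while the SP-RVFL pre-training idea is rendered here as an ELM-autoencoder-style l1-regularized reconstruction solved with ISTA, whose solution replaces the random input weights. All function names and hyperparameters (n_hidden, reg, lam, n_iter) are illustrative.

import numpy as np

# --- Plain RVFL: random hidden parameters, closed-form output weights ---

def rvfl_fit(X, Y, n_hidden=100, reg=1e-2, W=None, b=None, seed=0):
    """Fit an RVFL network. If W/b are supplied (e.g. pre-trained), they are
    used as the hidden-layer parameters; otherwise they are drawn at random."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    if W is None:
        W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))  # random input weights
        b = rng.uniform(-1.0, 1.0, size=n_hidden)       # random biases
    H = np.tanh(X @ W + b)            # enhancement (hidden) features
    D = np.hstack([X, H])             # direct input-output links + hidden features
    # Ridge-regression output weights: beta = (D^T D + reg*I)^{-1} D^T Y
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# --- SP-RVFL-style pre-training: sparse autoencoder via ISTA (assumed form) ---

def soft_threshold(Z, t):
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_pretrain(X, n_hidden=100, lam=1e-3, n_iter=200, seed=0):
    """Learn hidden-layer weights from the data alone by solving
        min_B 0.5*||H B - X||_F^2 + lam*||B||_1
    with ISTA, where H = tanh(X A + b0) is a random nonlinear mapping.
    B^T then replaces the RVFL's random input weights, so the hidden layer
    carries information about the input data."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
    b0 = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ A + b0)
    L = np.linalg.norm(H, 2) ** 2     # Lipschitz constant of the smooth part
    B = np.zeros((n_hidden, d))
    for _ in range(n_iter):
        grad = H.T @ (H @ B - X)      # gradient of the squared-error term
        B = soft_threshold(B - grad / L, lam / L)
    return B.T, b0                    # pre-trained input weights and biases

# Usage sketch (Y one-hot for classification):
#   W, b = sparse_pretrain(X_train)
#   W, b, beta = rvfl_fit(X_train, Y_train, W=W, b=b)
#   Y_pred = rvfl_predict(X_test, W, b, beta)

The design point this sketch mirrors is that only the output weights are ever solved against the labels; the pre-training step is fully unsupervised, so it can be reused across tasks on the same inputs.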