Parsimonious regularized extreme learning machine based on orthogonal transformation

Cited by: 12
Authors
Zhao, Yong-Ping [1 ]
Wang, Kang-Kang [1 ]
Li, Ye-Bo [2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Mech Engn, Nanjing 210094, Jiangsu, Peoples R China
[2] AVIC Aeroengine Control Res Inst, Wuxi 214063, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Extreme learning machine; Sparseness; Tikhonov regularization; Orthogonal transformation; Condition number; FEEDFORWARD NETWORKS; NEURAL-NETWORKS; CLASSIFICATION; OPTIMIZATION; REGRESSION; IDENTIFICATION; APPROXIMATION; ALGORITHM; SELECTION; ELM;
DOI
10.1016/j.neucom.2014.12.046
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Recently, two parsimonious algorithms were proposed to sparsify the extreme learning machine (ELM): the constructive parsimonious ELM (CP-ELM) and the destructive parsimonious ELM (DP-ELM). In this paper, the ideas behind CP-ELM and DP-ELM are extended to the regularized ELM (RELM), yielding CP-RELM and DP-RELM. Each can be realized in two schemes, viz. CP-RELM-I and CP-RELM-II (respectively DP-RELM-I and DP-RELM-II). Generally speaking, CP-RELM-II (DP-RELM-II) outperforms CP-RELM-I (DP-RELM-I) in terms of parsimoniousness: under nearly the same generalization performance, CP-RELM-II (DP-RELM-II) usually needs fewer hidden nodes than CP-ELM (DP-ELM). In addition, unlike CP-ELM and DP-ELM, CP-RELM and DP-RELM allow the number of candidate hidden nodes to exceed the number of training samples, which helps select better hidden nodes and construct more compact networks. Finally, experiments on eleven benchmark data sets, divided into two groups, demonstrate the usefulness of the proposed algorithms. (C) 2014 Elsevier B.V. All rights reserved.
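The baseline on which the abstract builds is the regularized ELM (RELM): a random hidden layer whose output weights are obtained from a Tikhonov-regularized (ridge) least-squares problem. The following is a minimal NumPy sketch of that baseline only, not of the paper's CP-RELM/DP-RELM node-selection schemes; the node count L, regularization parameter C, sigmoid activation, and toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) plus a little noise
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)

# Random hidden layer with sigmoid nodes, as in a basic ELM:
# input weights W and biases b are drawn randomly and never trained.
L = 50                                   # number of hidden nodes (assumed)
W = rng.standard_normal((X.shape[1], L))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # hidden-layer output matrix (200 x L)

# Tikhonov-regularized output weights:
#   beta = (H^T H + C I)^{-1} H^T y
C = 1e-3                                 # regularization parameter (assumed)
beta = np.linalg.solve(H.T @ H + C * np.eye(L), H.T @ y)

# Fit on the training data
y_hat = H @ beta
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

The regularization term C*I keeps the Gram matrix H^T H well conditioned, which is also what lets the candidate pool of hidden nodes exceed the number of training samples, since the unregularized normal equations would otherwise be singular.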
Pages: 280-296 (17 pages)