L2,1-norm robust regularized extreme learning machine for regression using CCCP method

Cited: 0
Authors
Wu Qing [1 ]
Wang Fan [1 ]
Fan Jiulun [2 ]
Hou Jing [3 ]
Affiliations
[1] School of Automation, Xi'an University of Posts and Telecommunications
[2] School of Telecommunication and Information Engineering & School of Artificial Intelligence, Xi'an University of Posts and Telecommunications
[3] School of Humanities and Foreign Languages, Xi'an University of Posts and Telecommunications
Funding
National Natural Science Foundation of China
Keywords
DOI
10.19682/j.cnki.1005-8885.2023.0004
CLC classification number
TP181 [automated reasoning, machine learning]; O212.1 [general mathematical statistics]
Subject classification codes
020208 ; 070103 ; 0714 ; 081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a way of training a single hidden layer feedforward network (SLFN), the extreme learning machine (ELM) is rapidly gaining popularity due to its efficiency. However, ELM tends to overfit, which makes the model sensitive to noise and outliers. To solve this problem, the L2,1-norm is introduced into ELM and an L2,1-norm robust regularized ELM (L2,1-RRELM) is proposed. L2,1-RRELM assigns constant penalties to outliers to reduce their adverse effects by replacing the least-squares loss function with a non-convex loss function. In light of the non-convexity of L2,1-RRELM, the concave-convex procedure (CCCP) is applied to solve its model. A convergence analysis of L2,1-RRELM is also given to show its robustness. To further verify the effectiveness of L2,1-RRELM, it is compared with three popular extreme learning algorithms on an artificial dataset and University of California Irvine (UCI) datasets, and each algorithm is tested in different noise environments using two evaluation criteria: root mean square error (RMSE) and fitness. The simulation results indicate that L2,1-RRELM achieves smaller RMSE and greater fitness under different noise settings. Numerical analysis shows that L2,1-RRELM has better generalization performance, stronger robustness, and higher anti-noise ability and fitness.
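For context, the baseline the abstract builds on can be sketched as follows. This is a minimal, illustrative implementation of a standard ridge-regularized ELM regressor (random fixed hidden layer, closed-form output weights), not the paper's L2,1-RRELM: the paper further replaces the squared loss with a non-convex L2,1-based loss solved via CCCP, which is not reproduced here. All function names and parameter choices below are the sketch's own assumptions.

```python
import numpy as np

def elm_fit(X, y, n_hidden=40, C=100.0, seed=0):
    """Train a regularized ELM: random hidden layer, ridge solution for output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Regularized least-squares solution: beta = (H^T H + I/C)^(-1) H^T y
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: fit a noisy 1-D sine curve and measure RMSE (one of the paper's criteria)
X = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(200)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because the squared loss grows quadratically, a single large outlier in `y` can dominate `beta`; bounding the per-sample penalty, as L2,1-RRELM does, is what limits that influence.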
Pages: 61-72
Page count: 12
Related papers
50 items in total
  • [41] Robust Neighborhood Preserving Projection by Nuclear/L2,1-Norm Regularization for Image Feature Extraction
    Zhang, Zhao
    Li, Fanzhang
    Zhao, Mingbo
    Zhang, Li
    Yan, Shuicheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2017, 26 (04) : 1607 - 1622
  • [42] Avoiding Optimal Mean l2,1-Norm Maximization-Based Robust PCA for Reconstruction
    Luo, Minnan
    Nie, Feiping
    Chang, Xiaojun
    Yang, Yi
    Hauptmann, Alexander G.
    Zheng, Qinghua
    NEURAL COMPUTATION, 2017, 29 (04) : 1124 - 1150
  • [43] A unified robust framework for multi-view feature extraction with L2,1-norm constraint
    Zhang, Jinxin
    Liu, Liming
    Zhen, Ling
    Jing, Ling
    NEURAL NETWORKS, 2020, 128 : 126 - 141
  • [44] Canonical Correlation Analysis With L2,1-Norm for Multiview Data Representation
    Xu, Meixiang
    Zhu, Zhenfeng
    Zhang, Xingxing
    Zhao, Yao
    Li, Xuelong
    IEEE TRANSACTIONS ON CYBERNETICS, 2020, 50 (11) : 4772 - 4782
  • [45] Underdetermined Wideband Source Localization via Sparse Bayesian Learning modeling l2,1-norm
    Hu, Nan
    Chen, Tingting
    2018 10TH INTERNATIONAL CONFERENCE ON COMMUNICATIONS, CIRCUITS AND SYSTEMS (ICCCAS 2018), 2018, : 217 - 221
  • [46] Robust Feature Selection Method Based on Joint L2,1 Norm Minimization for Sparse Regression
    Yang, Libo
    Zhu, Dawei
    Liu, Xuemei
    Cui, Pei
    ELECTRONICS, 2023, 12 (21)
  • [47] Sparse Neighborhood Preserving Embedding via L2,1-Norm Minimization
    Zhou, Youpeng
    Ding, Yulin
    Luo, Yifu
    Ren, Haoliang
    PROCEEDINGS OF 2016 9TH INTERNATIONAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE AND DESIGN (ISCID), VOL 2, 2016, : 378 - 382
  • [48] Divergence-Based Locally Weighted Ensemble Clustering with Dictionary Learning and L2,1-Norm
    Xu, Jiaxuan
    Wu, Jiang
    Li, Taiyong
    Nan, Yang
    ENTROPY, 2022, 24 (10)
  • [49] Cost-sensitive feature selection via the l2,1-norm
    Zhao, Hong
    Yu, Shenglong
    INTERNATIONAL JOURNAL OF APPROXIMATE REASONING, 2019, 104 : 25 - 37
  • [50] Discriminant Analysis via Joint Euler Transform and l2,1-Norm
    Liao, Shuangli
    Gao, Quanxue
    Yang, Zhaohua
    Chen, Fang
    Nie, Feiping
    Han, Jungong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2018, 27 (11) : 5668 - 5682