L1-Norm Robust Regularized Extreme Learning Machine with Asymmetric C-Loss for Regression

Cited by: 1
Authors
Wu, Qing [1 ,2 ]
Wang, Fan [1 ]
An, Yu [3 ]
Li, Ke [1 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Sch Automation, Xian 710121, Peoples R China
[2] Xian Key Lab Adv Control & Intelligent Proc, Xian 710121, Peoples R China
[3] Xian Univ Posts & Telecommun, Sch Elect Engn, Xian 710121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
extreme learning machine; asymmetric least square loss; expectile; correntropy; robustness;
DOI
10.3390/axioms12020204
CLC number
O29 [Applied Mathematics];
Discipline code
070104;
Abstract
Extreme learning machines (ELMs) have recently attracted significant attention due to their fast training speed and good predictive performance. However, ELMs ignore the inherent distribution of the original samples and are prone to overfitting, which prevents them from achieving good generalization performance. In this paper, based on the expectile penalty and correntropy, an asymmetric C-loss function (called the AC-loss) is proposed, which is non-convex, bounded, and relatively insensitive to noise. Further, a novel extreme learning machine, the L1-norm robust regularized extreme learning machine with asymmetric C-loss (L1-ACELM), is presented to handle the overfitting problem. The proposed algorithm benefits from the L1 norm and replaces the square loss function with the AC-loss function. The L1-ACELM can generate a more compact network with fewer hidden nodes and reduce the impact of noise. To evaluate the effectiveness of the proposed algorithm on noisy datasets, different levels of noise are added in the numerical experiments. The results on different types of artificial and benchmark datasets demonstrate that the L1-ACELM achieves better generalization performance than other state-of-the-art algorithms, especially when noise is present in the datasets.
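The abstract describes the AC-loss only qualitatively: an expectile-style asymmetric weight combined with a bounded, correntropy-induced term, used in place of the square loss inside an L1-regularized ELM objective. As a rough, hedged sketch of how these ingredients could fit together (an assumed illustration, not the paper's exact formulation), the Python snippet below defines a possible ac_loss, the random ELM hidden-layer mapping, and an L1-ACELM-style objective; the names and the parameters p (expectile level), sigma (kernel width), and C (trade-off) are assumptions introduced here for illustration.

import numpy as np

def ac_loss(residual, p=0.7, sigma=1.0):
    # Illustrative asymmetric C-loss (assumed form, not the authors' exact definition).
    # Expectile-style weight |p - 1(e < 0)| times the bounded correntropy-induced
    # term 1 - exp(-e^2 / (2*sigma^2)), so large residuals saturate rather than
    # growing quadratically.
    e = np.asarray(residual, dtype=float)
    w = np.where(e >= 0, p, 1.0 - p)          # asymmetric expectile weight
    return w * (1.0 - np.exp(-(e ** 2) / (2.0 * sigma ** 2)))

def elm_hidden_layer(X, W, b):
    # Random-feature hidden layer of an ELM: H = sigmoid(X W + b),
    # with W and b drawn once at random and kept fixed.
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def l1_acelm_objective(beta, H, y, C=1.0, p=0.7, sigma=1.0):
    # Assumed objective: L1 penalty on the output weights (promoting a compact
    # network) plus the AC-loss summed over the training residuals.
    residual = y - H @ beta
    return np.sum(np.abs(beta)) + C * np.sum(ac_loss(residual, p=p, sigma=sigma))

# Tiny usage example on synthetic data (illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
W = rng.standard_normal((3, 20))              # random input weights (fixed)
b = rng.standard_normal(20)                   # random hidden biases (fixed)
H = elm_hidden_layer(X, W, b)
beta = rng.standard_normal(20) * 0.01         # output weights to be optimized
print(l1_acelm_objective(beta, H, y))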
Pages: 22
Related Papers
50 records in total
  • [1] Robust Fisher-Regularized Twin Extreme Learning Machine with Capped L1-Norm for Classification
    Xue, Zhenxia
    Cai, Linchao
    [J]. AXIOMS, 2023, 12 (07)
  • [2] Regression and classification using extreme learning machine based on L1-norm and L2-norm
    Luo, Xiong
    Chang, Xiaohui
    Ban, Xiaojuan
    [J]. NEUROCOMPUTING, 2016, 174 : 179 - 186
  • [3] Robust regularized extreme learning machine with asymmetric Huber loss function
    Gupta, Deepak
    Hazarika, Barenya Bikash
    Berlin, Mohanadhas
    [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (16) : 12971 - 12998
  • [4] LL-ELM: A regularized extreme learning machine based on L1-norm and Liu estimator
    Yildirim, Hasan
    Ozkale, M. Revan
    [J]. NEURAL COMPUTING & APPLICATIONS, 2021, 33 (16) : 10469 - 10484
  • [5] L2,1-norm robust regularized extreme learning machine for regression using CCCP method
    Wu, Qing
    Wang, Fan
    Fan, Jiulun
    Hou, Jing
    [J]. THE JOURNAL OF CHINA UNIVERSITIES OF POSTS AND TELECOMMUNICATIONS, 2023, 30 (02) : 61 - 72
  • [6] A Novel Regularized Extreme Learning Machine Based on L1-Norm and L2-Norm: a Sparsity Solution Alternative to Lasso and Elastic Net
    Yildirim, Hasan
    Ozkale, M. Revan
    [J]. COGNITIVE COMPUTATION, 2024, 16 (02) : 641 - 653
  • [7] Tensor-Based Type-2 Extreme Learning Machine with L1-norm and Liu Regression
    Li, Jie
    Zhao, Guoliang
    [J]. 2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022 : 6327 - 6333
  • [8] Capped L1-norm distance metric-based fast robust twin extreme learning machine
    Ma, Jun
    [J]. APPLIED INTELLIGENCE, 2020, 50 : 3775 - 3787