L1-Norm Robust Regularized Extreme Learning Machine with Asymmetric C-Loss for Regression

Cited by: 1
|
Authors
Wu, Qing [1 ,2 ]
Wang, Fan [1 ]
An, Yu [3 ]
Li, Ke [1 ]
Institutions
[1] Xian Univ Posts & Telecommun, Sch Automation, Xian 710121, Peoples R China
[2] Xian Key Lab Adv Control & Intelligent Proc, Xian 710121, Peoples R China
[3] Xian Univ Posts & Telecommun, Sch Elect Engn, Xian 710121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
extreme learning machine; asymmetric least square loss; expectile; correntropy; robustness;
DOI
10.3390/axioms12020204
CLC Number
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Extreme learning machines (ELMs) have recently attracted significant attention due to their fast training speed and good predictive performance. However, ELMs ignore the inherent distribution of the original samples and are prone to overfitting, which prevents them from achieving good generalization performance. In this paper, an asymmetric C-loss function (called AC-loss), based on the expectile penalty and correntropy, is proposed; it is non-convex, bounded, and relatively insensitive to noise. Further, a novel extreme learning machine, the L-1 norm robust regularized extreme learning machine with asymmetric C-loss (L-1-ACELM), is presented to handle the overfitting problem. The proposed algorithm benefits from the L-1 norm and replaces the square loss function with the AC-loss function. The L-1-ACELM can generate a more compact network with fewer hidden nodes and reduce the impact of noise. To evaluate the effectiveness of the proposed algorithm on noisy datasets, different levels of noise are added in the numerical experiments. The results on various artificial and benchmark datasets demonstrate that L-1-ACELM achieves better generalization performance than other state-of-the-art algorithms, especially when noise is present in the data.
Pages: 22
Related Papers
50 records in total
  • [22] Learning robust principal components from L1-norm maximization
    Feng, Dingcheng
    Chen, Feng
    Xu, Wenli
    JOURNAL OF ZHEJIANG UNIVERSITY-SCIENCE C (COMPUTERS & ELECTRONICS), 2012, 13 (12) : 901 - 908
  • [23] Robust Tensor Analysis With L1-Norm
    Pang, Yanwei
    Li, Xuelong
    Yuan, Yuan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2010, 20 (02) : 172 - 178
  • [24] ROBUST CAPPED L1-NORM PROJECTION TWIN SUPPORT VECTOR MACHINE
    Yang, Linxi
    Wang, Yan
    Li, Guoquan
    JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION, 2023, 19 (08) : 5797 - 5815
  • [25] THE PLACE OF THE L1-NORM IN ROBUST ESTIMATION
    HUBER, PJ
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 1987, 5 (04) : 255 - 262
  • [26] l1-norm Nonparallel Support Vector Machine for PU Learning
    Bai, Fusheng
    Yuan, Yongjia
    2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018,
  • [28] C-Loss-Based Doubly Regularized Extreme Learning Machine
    Wu, Qing
    Fu, Yan-Lin
    Cui, Dong-Shun
    Wang, En
    COGNITIVE COMPUTATION, 2023, 15 (02) : 496 - 519
  • [30] Robust regularized extreme learning machine for regression using iteratively reweighted least squares
    Chen, Kai
    Lv, Qi
    Lu, Yao
    Dou, Yong
    NEUROCOMPUTING, 2017, 230 : 345 - 358