L1-Norm Robust Regularized Extreme Learning Machine with Asymmetric C-Loss for Regression

Cited: 1
Authors
Wu, Qing [1 ,2 ]
Wang, Fan [1 ]
An, Yu [3 ]
Li, Ke [1 ]
Affiliations
[1] Xian Univ Posts & Telecommun, Sch Automation, Xian 710121, Peoples R China
[2] Xian Key Lab Adv Control & Intelligent Proc, Xian 710121, Peoples R China
[3] Xian Univ Posts & Telecommun, Sch Elect Engn, Xian 710121, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
extreme learning machine; asymmetric least square loss; expectile; correntropy; robustness;
DOI
10.3390/axioms12020204
CLC Number
O29 [Applied Mathematics];
Discipline Code
070104;
Abstract
Extreme learning machines (ELMs) have recently attracted significant attention due to their fast training speed and good predictive performance. However, ELMs ignore the inherent distribution of the original samples and are prone to overfitting, which prevents them from achieving good generalization performance. In this paper, based on the expectile penalty and correntropy, an asymmetric C-loss function (called the AC-loss) is proposed, which is non-convex, bounded, and relatively insensitive to noise. Furthermore, a novel extreme learning machine called the L1-norm robust regularized extreme learning machine with asymmetric C-loss (L1-ACELM) is presented to handle the overfitting problem. The proposed algorithm benefits from the L1 norm and replaces the square loss function with the AC-loss function. The L1-ACELM can generate a more compact network with fewer hidden nodes and reduce the impact of noise. To evaluate the effectiveness of the proposed algorithm on noisy datasets, different levels of noise are added in the numerical experiments. The results on various artificial and benchmark datasets demonstrate that the L1-ACELM achieves better generalization performance than other state-of-the-art algorithms, especially when noise exists in the datasets.
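The AC-loss itself is not reproduced in this record. As a minimal, hypothetical sketch, assuming the AC-loss composes the Gaussian correntropy kernel of the standard C-loss with the expectile (asymmetric least squares) penalty named in the keywords, it could look like the following; the names expectile_penalty, ac_loss, tau, and sigma are illustrative and not taken from the paper.

import numpy as np

def expectile_penalty(e, tau=0.5):
    # Asymmetric least squares (expectile) penalty: squared positive
    # residuals are weighted by tau, negative ones by (1 - tau);
    # tau = 0.5 recovers the ordinary squared error up to a factor.
    weight = np.where(e >= 0, tau, 1.0 - tau)
    return weight * e ** 2

def ac_loss(e, tau=0.5, sigma=1.0):
    # Assumed form of the asymmetric C-loss: the expectile penalty is
    # passed through the correntropy-induced Gaussian kernel, giving a
    # loss that is non-convex, bounded in [0, 1), and saturates for
    # large residuals, which is what makes it insensitive to outliers.
    return 1.0 - np.exp(-expectile_penalty(e, tau) / (2.0 * sigma ** 2))

# An outlier-sized residual (|e| = 5) contributes barely more than
# |e| = 1 once the loss saturates, unlike the unbounded square loss.
print(ac_loss(np.array([-5.0, -1.0, 0.0, 1.0, 5.0]), tau=0.7, sigma=1.0))

Under this reading, tau controls how differently positive and negative residuals are penalized, while sigma controls how quickly the loss saturates and therefore how aggressively outliers are down-weighted.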
Pages: 22
Related Papers
50 records in total
  • [31] Regularized extreme learning machine for regression with missing data
    Yu, Qi
    Miche, Yoan
    Eirola, Emil
    van Heeswijk, Mark
    Severin, Eric
    Lendasse, Amaury
    NEUROCOMPUTING, 2013, 102 : 45 - 51
  • [32] l1-norm penalised orthogonal forward regression
    Hong, Xia
    Chen, Sheng
    Guo, Yi
    Gao, Junbin
    INTERNATIONAL JOURNAL OF SYSTEMS SCIENCE, 2017, 48 (10) : 2195 - 2201
  • [33] L1-Norm Support Vector Regression in Primal Based on Huber Loss Function
    Puthiyottil, Anagha
    Balasundaram, S.
    Meena, Yogendra
    PROCEEDINGS OF ICETIT 2019: EMERGING TRENDS IN INFORMATION TECHNOLOGY, 2020, 605 : 193 - 203
  • [34] Bayesian L1-norm sparse learning
    Lin, Yuanqing
    Lee, Daniel D.
2006 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-13, 2006 : 5463 - 5466
  • [35] Robust L1-norm non-parallel proximal support vector machine
    Li, Chun-Na
    Shao, Yuan-Hai
    Deng, Nai-Yang
    OPTIMIZATION, 2016, 65 (01) : 169 - 183
  • [36] Inference robust to outliers with l1-norm penalization
    Beyhum, Jad
    ESAIM-PROBABILITY AND STATISTICS, 2020, 24 : 688 - 702
  • [37] L1-norm loss based twin support vector machine for data recognition
    Peng, Xinjun
    Xu, Dong
    Kong, Lingyan
    Chen, Dongjing
    INFORMATION SCIENCES, 2016, 340 : 86 - 103
  • [38] C-loss based extreme learning machine for estimating power of small-scale turbojet engine
    Zhao, Yong-Ping
    Tan, Jian-Feng
    Wang, Jian-Jun
    Yang, Zhe
    AEROSPACE SCIENCE AND TECHNOLOGY, 2019, 89 : 407 - 419
  • [39] Hierarchical extreme learning machine with L21-norm loss and regularization
    Li, Rui
    Wang, Xiaodan
    Song, Yafei
    Lei, Lei
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2021, 12 (05) : 1297 - 1310