Robust regression framework with asymmetrically analogous to correntropy-induced loss

Cited by: 0
|
Authors
Yang, Liming [1 ]
Ding, Guangsheng [1 ]
Yuan, Chao [2 ]
Zhang, Min [1 ]
Affiliations
[1] China Agr Univ, Coll Sci, Beijing 100083, Peoples R China
[2] China Agr Univ, Coll Informat & Elect Engn, Beijing, Peoples R China
Keywords
Robustness; Asymmetric least squares loss; Expectile; Nonconvexity; Correntropy; Regression; CCCP; Support vector machine; Classification; Quantiles;
DOI
10.1016/j.knosys.2019.105211
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This work proposes a robust loss function based on the expectile penalty (named the rescaled expectile loss, RE-loss), which includes and generalizes several existing loss functions. Important properties of the RE-loss are then established, including asymmetry, nonconvexity, smoothness, boundedness, and its asymptotic approximation behavior. From the viewpoint of correntropy, the proposed RE-loss can be viewed as a correntropy-induced loss generated by a reproducing piecewise kernel. Furthermore, a sparse version of the RE-loss (called the SRE-loss function) is developed to improve sparsity by introducing an epsilon-insensitive zone. Two robust regression frameworks are then built on the proposed loss functions. However, the nonconvexity of the proposed losses makes the resulting problems difficult to optimize. We apply the concave-convex procedure (CCCP) and dual theory to solve the problems effectively, and the resulting algorithms converge linearly. To validate the proposed methods, numerical experiments are carried out on datasets of different scales with different levels of noise and outliers. Across three types of data, including an artificial dataset, benchmark datasets, and a practical application dataset, the experimental results demonstrate that the proposed methods achieve better generalization than traditional regression methods in most cases, especially when the noise and outlier distributions are imbalanced. (C) 2019 Elsevier B.V. All rights reserved.
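As a rough illustration of the ideas in the abstract: the expectile (asymmetric least squares) penalty weights positive and negative residuals differently, and a correntropy-style rescaling of it yields a bounded, nonconvex loss that caps the influence of outliers. The sketch below assumes the standard expectile definition and a generic rescaling of the form 1 - exp(-loss / sigma^2); the paper's exact RE-loss formula is not reproduced here, so the `rescaled_expectile_loss` form is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def expectile_loss(r, tau=0.5):
    """Standard asymmetric least squares (expectile) penalty:
    tau * r^2 for residuals r >= 0, (1 - tau) * r^2 for r < 0.
    tau = 0.5 recovers the ordinary squared loss (up to a factor)."""
    w = np.where(r >= 0, tau, 1.0 - tau)
    return w * np.square(r)

def rescaled_expectile_loss(r, tau=0.5, sigma=1.0):
    """Bounded, nonconvex rescaling of the expectile penalty in the
    spirit of correntropy-induced losses: 1 - exp(-L(r) / sigma^2).
    Illustrative form only; the paper's RE-loss may differ in detail."""
    return 1.0 - np.exp(-expectile_loss(r, tau) / sigma ** 2)
```

The rescaled loss is smooth, asymmetric for tau != 0.5, and saturates toward 1 as |r| grows, which is what limits the effect of large outliers; this nonconvexity is also why a CCCP-style convex-concave decomposition is needed for optimization.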
Pages: 13
Related articles (50 total)
  • [21] Wang, Wei; Zhang, Gaowei; Han, Hongyong; Zhang, Chi. Correntropy-Induced Wasserstein GCN: Learning Graph Embedding via Domain Adaptation. IEEE Transactions on Image Processing, 2023, 32: 3980-3993.
  • [22] Guo, Yu; Wang, Fei; Chen, Badong; Xin, Jingmin. Robust echo state networks based on correntropy induced loss function. Neurocomputing, 2017, 267: 295-303.
  • [23] Wang, Yidan; Yang, Liming; Ren, Qiangqiang. A robust classification framework with mixture correntropy. Information Sciences, 2019, 491: 306-318.
  • [24] Ma, Wentao; Chen, Badong; Zhao, Haiquan; Gui, Guan; Duan, Jiandong; Principe, Jose C. Sparse Least Logarithmic Absolute Difference Algorithm with Correntropy-Induced Metric Penalty. Circuits, Systems, and Signal Processing, 2016, 35 (03): 1077-1089.
  • [26] He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang; Kong, Xiang-Wei. A Regularized Correntropy Framework for Robust Pattern Recognition. Neural Computation, 2011, 23 (08): 2074-2100.
  • [27] Li, Yuanhao; Chen, Badong; Wang, Gang; Yoshimura, Natsue; Koike, Yasuharu. Partial maximum correntropy regression for robust electrocorticography decoding. Frontiers in Neuroscience, 2023, 17.
  • [28] Fu, Saiji; Tian, Yingjie; Tang, Long. Robust regression under the general framework of bounded loss functions. European Journal of Operational Research, 2023, 310 (03): 1325-1339.
  • [29] Bako, Laurent. Robustness analysis of a maximum correntropy framework for linear regression. Automatica, 2018, 87: 218-225.
  • [30] Hu, Ting; Guo, Renjie. Distributed robust regression with correntropy losses and regularization kernel networks. Analysis and Applications, 2024, 22 (04).