Mixture Correntropy-Based Kernel Extreme Learning Machines
Cited by: 28
Authors: Zheng, Yunfei [1]; Chen, Badong [1]; Wang, Shiyuan [2]; Wang, Weiqun [3]; Qin, Wei [4]
Affiliations:
[1] Xi An Jiao Tong Univ, Inst Artificial Intelligence & Robot, Xian 710049, Peoples R China
[2] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
[3] Chinese Acad Sci, State Key Lab Management & Control Complex Syst, Inst Automat, Beijing 100190, Peoples R China
[4] Xidian Univ, Sch Life Sci & Technol, Xian 710071, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Kernel;
Optimization;
Learning systems;
Robustness;
Support vector machines;
Mean square error methods;
Extreme learning machine (ELM);
kernel method;
mixture correntropy;
online learning;
FIXED-POINT ALGORITHM;
UNIVERSAL APPROXIMATION;
CONVERGENCE;
REGRESSION;
NETWORKS;
DOI:
10.1109/TNNLS.2020.3029198
Chinese Library Classification:
TP18 [Artificial Intelligence Theory];
Subject classification codes:
081104 ;
0812 ;
0835 ;
1405 ;
Abstract:
Kernel-based extreme learning machine (KELM), a natural extension of ELM to kernel learning, has achieved outstanding performance on various regression and classification problems. Compared with the basic ELM, KELM generalizes better because it requires neither the number of hidden nodes to be specified beforehand nor a random projection mechanism. However, since KELM is derived under the minimum mean square error (MMSE) criterion, which assumes Gaussian noise, its performance may deteriorate seriously in non-Gaussian cases. To improve the robustness of KELM, this article proposes a mixture correntropy-based KELM (MC-KELM), which adopts the recently proposed maximum mixture correntropy criterion as the optimization criterion instead of the MMSE criterion. In addition, an online sequential version of MC-KELM (MCOS-KELM) is developed to handle data that arrive sequentially (one-by-one or chunk-by-chunk). Experimental results on regression and classification data sets validate the performance superiority of the new methods.
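The mixture correntropy criterion replaces the squared-error loss with a mixture of two Gaussian kernels of different bandwidths, which down-weights large (outlier) residuals while staying smooth for small ones. A common way to optimize such criteria is a fixed-point (half-quadratic) iteration that repeatedly solves a sample-weighted kernel ridge system. The sketch below illustrates that idea only; it is not the authors' exact algorithm, and all parameter names (`gamma`, `C`, `alpha`, `s1`, `s2`, the iteration count) are assumptions chosen for illustration:

```python
import numpy as np

def gaussian_gram(X, Z, gamma=1.0):
    """Gaussian (RBF) Gram matrix between row-sample matrices X and Z."""
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Z**2, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def mc_kelm_fit(X, y, gamma=1.0, C=100.0, alpha=0.5, s1=0.5, s2=2.0, iters=20):
    """Sketch of a mixture-correntropy-style KELM fit.

    Each iteration re-weights samples by a mixture of two Gaussian
    functions of the current residuals (bandwidths s1 and s2, mixture
    weight alpha), then solves a weighted regularized kernel system.
    """
    n = len(y)
    K = gaussian_gram(X, X, gamma)
    # Plain KELM (MMSE) solution as the initial point.
    beta = np.linalg.solve(K + np.eye(n) / C, y)
    for _ in range(iters):
        e = y - K @ beta
        # Mixture-correntropy induced weights: large residuals get
        # exponentially small weight, so outliers barely influence beta.
        w = ((alpha / s1**2) * np.exp(-e**2 / (2 * s1**2))
             + ((1 - alpha) / s2**2) * np.exp(-e**2 / (2 * s2**2)))
        beta = np.linalg.solve(w[:, None] * K + np.eye(n) / C, w * y)
    return beta

def mc_kelm_predict(X_train, beta, X_new, gamma=1.0):
    """Predict on new rows using the training set and learned coefficients."""
    return gaussian_gram(X_new, X_train, gamma) @ beta
```

The small-bandwidth component (`s1`) provides strong outlier rejection, while the large-bandwidth component (`s2`) keeps the effective loss well-behaved for moderate errors; this two-scale behavior is the motivation for the mixture over a single correntropy kernel.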
Pages: 811-825
Number of pages: 15