REGULARIZED KERNEL NETWORKS WITH CONVEX P-LIPSCHITZ LOSS

Cited: 0
Authors
Pan, Weishan [1]
Sun, Hongwei [1]
Affiliations
[1] Univ Jinan, Sch Math Sci, Jinan 250022, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Learning theory; regularized kernel network; leave-one-out analysis; error bound; learning rate; CONDITIONAL QUANTILES; LEARNING RATES; REGRESSION
DOI
10.3934/mfc.2023049
CLC number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
We propose a class of loss functions, called convex p-Lipschitz losses, which includes the hinge loss, the pinball loss, and the least squares loss, among others. For regularized kernel networks and bias-corrected regularized kernel networks with a general convex p-Lipschitz loss, we establish error analysis frameworks by employing the leave-one-out technique [16]. Under a mild source condition, which describes how well the minimizer f* of the generalization error can be approximated by the hypothesis space HK, satisfactory error bounds and learning rates are derived. Moreover, our proofs also show that the bias correction method can indeed decrease the learning error.
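To make the setting concrete, the following is a minimal sketch of a regularized kernel network: the estimator minimizes the empirical risk of a convex loss plus an RKHS-norm penalty over functions f = Σ_j α_j K(x_j, ·). It is not the authors' implementation; the Gaussian kernel, the pinball (quantile) loss as one member of the p-Lipschitz class, the subgradient-descent solver, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 * sigma^2))
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def pinball_subgrad(residual, tau=0.5):
    # Subgradient of the pinball loss L_tau(r) = max(tau * r, (tau - 1) * r)
    # with respect to the prediction f(x), where r = y - f(x).
    return np.where(residual >= 0, -tau, 1.0 - tau)

def rkn_fit(X, y, lam=0.01, tau=0.5, sigma=1.0, lr=0.5, iters=2000):
    # Represent f = sum_j alpha_j K(x_j, .) and minimize the regularized
    # empirical risk (1/m) sum_i L_tau(y_i - f(x_i)) + lam * alpha^T K alpha
    # by plain subgradient descent with a diminishing step size.
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(m)
    for t in range(iters):
        f = K @ alpha
        g = K @ pinball_subgrad(y - f, tau) / m + 2.0 * lam * (K @ alpha)
        alpha -= (lr / np.sqrt(t + 1.0)) * g
    return alpha

# Toy usage: estimate the conditional median of y = x + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = X[:, 0] + 0.1 * rng.standard_normal(60)
alpha = rkn_fit(X, y)
pred = gaussian_kernel(X, X) @ alpha
```

With tau = 0.5 the pinball loss estimates the conditional median, which connects to the CONDITIONAL QUANTILES keyword above; the leave-one-out analysis of the paper concerns the theoretical error of such estimators, not the optimizer.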
Pages: 12