Learning for Control: L1-Error Bounds for Kernel-Based Regression

Cited by: 0
Authors
Bisiacco, Mauro [1 ]
Pillonetto, Gianluigi [1 ]
Affiliations
[1] University of Padova, Department of Information Engineering, Padova, 35122, Italy
Keywords
Error analysis - Hilbert spaces - Learning algorithms - Linear transformations - Regression analysis - Signal processing - Uncertainty analysis - Vector spaces
DOI
10.1109/TAC.2024.3372882
Abstract
We consider functional regression models with noisy outputs resulting from linear transformations. In the setting of regularization theory in reproducing kernel Hilbert spaces (RKHSs), much work has been devoted to building uncertainty bounds around kernel-based estimates, hence characterizing their convergence rates. Such results are typically formulated using either the average squared prediction loss or the RKHS norm. However, in signal processing and in emerging areas such as learning for control, measuring the estimation error through the L1 norm is often more advantageous. It can, e.g., provide insights on the convergence rate in the Laplace/Fourier domain, whose role is crucial in the analysis of dynamical systems. For this reason, we consider all the RKHSs H associated with Lebesgue measurable positive-definite kernels that induce subspaces of L1, also known as stable RKHSs in the literature. The inclusion H ⊂ L1 is then characterized. This permits converting all the error bounds that depend on the RKHS norm into bounds in terms of the L1 norm. We also show that our result is optimal: no better reformulation of the bounds in L1 exists than the one presented here. © 1963-2012 IEEE.
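As a rough numerical illustration of the abstract's theme (not the paper's sharper characterization), a minimal sketch follows: kernel ridge regression with a Gaussian kernel, where the estimate's RKHS norm is computed from the representer coefficients and then used in the standard pointwise inequality |f(x)| ≤ √K(x,x)·‖f‖_H (reproducing property plus Cauchy–Schwarz), which on a bounded interval yields a crude L1 bound ∫|f| ≤ ‖f‖_H ∫√K(x,x) dx. All data, kernel choices, and parameter values here are illustrative assumptions.

```python
import numpy as np

def gauss_kernel(x, y, ell=0.5):
    """Gaussian kernel matrix K[i, j] = exp(-(x_i - y_j)^2 / (2 ell^2))."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 30)                       # illustrative inputs
y = np.sin(3 * X) + 0.1 * rng.standard_normal(30)  # noisy outputs

# Kernel ridge regression: representer coefficients c solve (K + lam I) c = y
lam = 1e-2
K = gauss_kernel(X, X)
c = np.linalg.solve(K + lam * np.eye(len(X)), y)

# RKHS norm of the estimate: ||f||_H^2 = c^T K c
rkhs_norm = np.sqrt(c @ K @ c)

# Numerical L1 norm of f over [-1, 1] via the trapezoidal rule
grid = np.linspace(-1, 1, 2001)
dx = grid[1] - grid[0]
f_grid = gauss_kernel(grid, X) @ c
l1_norm = np.sum((np.abs(f_grid[:-1]) + np.abs(f_grid[1:])) * dx / 2)

# Crude bound: int |f| <= ||f||_H * int sqrt(K(x,x)) dx; here K(x,x) = 1,
# so the integral is just the interval length, 2.
bound = rkhs_norm * (grid[-1] - grid[0])
print(f"L1 norm ≈ {l1_norm:.4f}, crude RKHS-based bound = {bound:.4f}")
```

The gap between `l1_norm` and `bound` shows why this naive conversion is loose; the paper's contribution is an optimal reformulation of RKHS-norm bounds in terms of the L1 norm for stable kernels.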
Pages: 6530 - 6545
Related Papers
Items [21]-[30] of 50
  • [21] Kernel-Based Reinforcement Learning
    Hu, Guanghua
    Qiu, Yuqin
    Xiang, Liming
    [J]. INTELLIGENT COMPUTING, PART I: INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING, ICIC 2006, PART I, 2006, 4113 : 757 - 766
  • [22] Kernel-Based Reinforcement Learning
    Dirk Ormoneit
    Śaunak Sen
    [J]. Machine Learning, 2002, 49 : 161 - 178
  • [23] Kernel-based reinforcement learning
    Ormoneit, D
    Sen, S
    [J]. MACHINE LEARNING, 2002, 49 (2-3) : 161 - 178
  • [24] Gradient descent for robust kernel-based regression
    Guo, Zheng-Chu
    Hu, Ting
    Shi, Lei
    [J]. INVERSE PROBLEMS, 2018, 34 (06)
  • [25] Kernel-based online regression with canal loss
    Liang, Xijun
    Zhang, Zhipeng
    Song, Yunquan
    Jian, Ling
    [J]. EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2022, 297 (01) : 268 - 279
  • [26] Random multi-scale kernel-based Bayesian distribution regression learning
    Dong, Xue-Mei
    Gu, Yin-He
    Shi, Jian
    Xiang, Kun
    [J]. KNOWLEDGE-BASED SYSTEMS, 2020, 201
  • [27] Asymptotic normality of the L1-error of a boundary estimator
    Geffroy, J
    Girard, S
    Jacob, P
    [J]. JOURNAL OF NONPARAMETRIC STATISTICS, 2006, 18 (01) : 21 - 31
  • [28] Learning rates for the risk of kernel-based quantile regression estimators in additive models
    Christmann, Andreas
    Zhou, Ding-Xuan
    [J]. ANALYSIS AND APPLICATIONS, 2016, 14 (03) : 449 - 477
  • [29] Asymptotic normality of two symmetry test statistics based on the L1-error
    Berrahou, Noureddine
    Louani, Djamal
    [J]. JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2010, 140 (07) : 1788 - 1804
  • [30] Kernel-based learning of orthogonal functions
    Scampicchio, Anna
    Pillonetto, Gianluigi
    Bisiacco, Mauro
    [J]. IFAC PAPERSONLINE, 2020, 53 (02): : 2305 - 2310