Estimates on learning rates for multi-penalty distribution regression

Cited by: 0
Authors
Yu, Zhan [1 ]
Ho, Daniel W.C. [2 ]
Affiliations
[1] Department of Mathematics, Hong Kong Baptist University, 224 Waterloo Road, Kowloon Tong, Hong Kong
[2] Department of Mathematics, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong
Keywords
Regression analysis;
DOI: not available
Abstract
This paper concerns functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm aims at regressing to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and remains challenging, since only second-stage samples are observable in practical settings. In our algorithm, to extract information from the distribution samples, we embed the distributions into a reproducing kernel Hilbert space H_K associated with a Mercer kernel K via the mean embedding technique. One of the primary contributions of this work is a novel multi-penalty regularization algorithm, which is able to capture more potential features of distribution regression. Optimal learning rates of the algorithm are obtained under mild conditions. The work also derives learning rates for distribution regression in the hard learning scenario f_ρ ∉ H_K, which has not been explored in the existing literature. Moreover, we propose a new distribution-regression-based distributed learning algorithm to address the large-scale data challenges arising from distribution data, and optimal learning rates are derived for it as well. By providing new algorithms and establishing their learning rates, the work improves the existing literature in several aspects. © 2023 Elsevier Inc.
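The two-stage pipeline the abstract describes — embed each observed sample bag into H_K via its empirical kernel mean, then regress on the embeddings — can be sketched as follows. This is a minimal single-penalty kernel ridge illustration, not the paper's multi-penalty scheme (which adds further regularization terms); the Gaussian kernel choice, the bag representation, and all function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(a, b, gamma=1.0):
    """Pairwise Gaussian kernel between sample arrays a (n, d) and b (m, d)."""
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mean_embedding_gram(bags_a, bags_b, gamma=1.0):
    """G[i, j] = <mu_i, mu_j>_{H_K}: inner product of empirical kernel mean
    embeddings, i.e. the kernel averaged over the two second-stage samples."""
    G = np.empty((len(bags_a), len(bags_b)))
    for i, ba in enumerate(bags_a):
        for j, bb in enumerate(bags_b):
            G[i, j] = gaussian_kernel(ba, bb, gamma).mean()
    return G

def fit_ridge(bags, y, lam=1e-3, gamma=1.0):
    """Kernel ridge regression on mean embeddings (single penalty lam)."""
    G = mean_embedding_gram(bags, bags, gamma)
    return np.linalg.solve(G + lam * np.eye(len(bags)), np.asarray(y))

def predict(alpha, train_bags, test_bags, gamma=1.0):
    """Predict real-valued outputs for new bags of samples."""
    return mean_embedding_gram(test_bags, train_bags, gamma) @ alpha
```

Each "bag" is the second-stage sample drawn from one unobserved distribution; the Gram matrix of inner products between mean embeddings is all the estimator needs, so the distributions themselves never have to be represented explicitly.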
Related papers (50 records in total)
  • [21] Learning rates of multi-kernel regularized regression
    Chen, Hong
    Li, Luoqing
    JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2010, 140 (09) : 2562 - 2568
  • [22] Convergence and quasi-optimality of an adaptive continuous interior multi-penalty finite element method
    Zhu, Lingxue
    Zhou, Zhenhua
    INTERNATIONAL JOURNAL OF COMPUTER MATHEMATICS, 2020, 97 (09) : 1884 - 1907
  • [23] Learning rates of multi-kernel regression by orthogonal greedy algorithm
    Chen, Hong
    Li, Luoqing
    Pan, Zhibin
    JOURNAL OF STATISTICAL PLANNING AND INFERENCE, 2013, 143 (02) : 276 - 282
  • [24] Learning Rates of Regularized Regression With Multiple Gaussian Kernels for Multi-Task Learning
    Xu, Yong-Li
    Li, Xiao-Xing
    Chen, Di-Rong
    Li, Han-Xiong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (11) : 5408 - 5418
  • [25] Multi-penalty conditional random field approach to super-resolved reconstruction of optical coherence tomography images
    Boroomand, Ameneh
    Wong, Alexander
    Li, Edward
    Cho, Daniel S.
    Ni, Betty
    Bizheva, Kostandinka
    BIOMEDICAL OPTICS EXPRESS, 2013, 4 (10) : 2032 - 2050
  • [26] THE APPROXIMATE DISTRIBUTION OF NONPARAMETRIC REGRESSION ESTIMATES
    ROBINSON, PM
    STATISTICS & PROBABILITY LETTERS, 1995, 23 (02) : 193 - 201
  • [28] What differences a day can make: Quantile regression estimates of the distribution of daily learning gains
    Hayes, Michael S.
    Gershenson, Seth
    ECONOMICS LETTERS, 2016, 141 : 48 - 51
  • [30] Learning Theory for Distribution Regression
    Szabo, Zoltan
    Sriperumbudur, Bharath K.
    Poczos, Barnabas
    Gretton, Arthur
    JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17