Estimates on learning rates for multi-penalty distribution regression

Cited: 0
Authors
Yu, Zhan [1 ]
Ho, Daniel W.C. [2 ]
Affiliations
[1] Department of Mathematics, Hong Kong Baptist University, 224 Waterloo Road, Kowloon Tong, Hong Kong
[2] Department of Mathematics, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon Tong, Hong Kong
Keywords
Regression analysis;
DOI
Not available
Abstract
This paper is concerned with functional learning via two-stage sampled distribution regression. We study a multi-penalty regularization algorithm for distribution regression in the framework of learning theory. The algorithm regresses to real-valued outputs from probability measures. The theoretical analysis of distribution regression is far from mature and quite challenging, since only second-stage samples are observable in practical settings. In our algorithm, to transform the information carried by the distribution samples, we embed the distributions into a reproducing kernel Hilbert space H_K associated with a Mercer kernel K via the mean embedding technique. One of the primary contributions of this work is the introduction of a novel multi-penalty regularization algorithm, which is able to capture more potential features of distribution regression. Optimal learning rates of the algorithm are obtained under mild conditions. The work also derives learning rates for distribution regression in the hard learning scenario f_ρ ∉ H_K, which has not been explored in the existing literature. Moreover, we propose a new distribution-regression-based distributed learning algorithm to handle the large-scale data and information challenges arising from distribution data, and derive optimal learning rates for it. By providing new algorithms and establishing their learning rates, the work improves the existing literature in several respects. © 2023 Elsevier Inc.
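
To make the two-stage setup concrete, the following is a minimal Python sketch (not the authors' implementation) of distribution regression with a multi-penalty regularizer: each input is a bag of second-stage points, bags are compared through empirical kernel mean embeddings, and the coefficients are obtained from a two-penalty regularized least-squares problem. The Gaussian kernel, the specific second penalty term, and all function names below are illustrative assumptions.

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mean_embedding_gram(samples, sigma=1.0):
    # Gram matrix of empirical kernel mean embeddings: entry (i, j) averages
    # the kernel over all point pairs from bag i and bag j, estimating the
    # RKHS inner product <mu_i, mu_j> of the embedded distributions.
    m = len(samples)
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = gaussian_kernel(samples[i], samples[j], sigma).mean()
    return K

def multi_penalty_dr(samples, y, lam1=1e-2, lam2=1e-3, sigma=1.0):
    # Two-penalty regularized least squares over the embedded distributions:
    #   min_c  (1/m) ||K c - y||^2 + lam1 * c^T K c + lam2 * c^T K^2 c,
    # whose first-order condition gives c = (K + m*lam1*I + m*lam2*K)^{-1} y.
    # The lam2 term (an empirical-norm penalty) is only one illustrative
    # choice of additional regularizer; the paper studies a general scheme.
    m = len(y)
    K = mean_embedding_gram(samples, sigma)
    c = np.linalg.solve(K + m * lam1 * np.eye(m) + m * lam2 * K, y)
    return K, c

# Toy usage: each first-stage object is a bag of points drawn from a
# distribution; the label is a functional of that distribution (here, its mean).
rng = np.random.default_rng(0)
samples = [rng.normal(loc=mu, scale=0.5, size=(50, 1)) for mu in np.linspace(-1, 1, 30)]
y = np.array([s.mean() for s in samples])
K, c = multi_penalty_dr(samples, y)
print("training residual:", np.linalg.norm(K @ c - y))
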
Related papers
50 records in total
  • [1] Estimates on learning rates for multi-penalty distribution regression
    Yu, Zhan
    Ho, Daniel W. C.
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2024, 69
  • [2] Multi-penalty regularization in learning theory
    Abhishake
    Sivananthan, S.
    JOURNAL OF COMPLEXITY, 2016, 36 : 141 - 165
  • [3] Distributed learning with multi-penalty regularization
    Guo, Zheng-Chu
    Lin, Shao-Bo
    Shi, Lei
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2019, 46 (03) : 478 - 499
  • [4] Convergence analysis of distributed multi-penalty regularized pairwise learning
    Hu, Ting
    Fan, Jun
    Xiang, Dao-Hong
    ANALYSIS AND APPLICATIONS, 2020, 18 (01) : 109 - 127
  • [5] Fast Cross-validation for Multi-penalty High-dimensional Ridge Regression
    van de Wiel, Mark A.
    van Nee, Mirrelijn M.
    Rauschenberger, Armin
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2021, 30 (04) : 835 - 847
  • [6] Multi-penalty Regularization Inversion in Dynamic Light Scattering
    Xiu Wen-zheng
    Shen Jin
    Xu Min
    Zhu Xin-jun
    Gao Ming-liang
    Liu Wei
    Wang Ya-jing
    ACTA PHOTONICA SINICA, 2018, 47 (01)
  • [7] Multi-penalty regularization with a component-wise penalization
    Naumova, V.
    Pereverzyev, S. V.
    INVERSE PROBLEMS, 2013, 29 (07)
  • [8] Improved Covariance Matrix Estimators by Multi-Penalty Regularization
    Zhang, Bin
    Zhou, Jie
    Li, Jianbo
    2019 22ND INTERNATIONAL CONFERENCE ON INFORMATION FUSION (FUSION 2019), 2019,
  • [9] Gradient-based bilevel optimization for multi-penalty Ridge regression through matrix differential calculus
    Maroni, Gabriele
    Cannelli, Loris
    Piga, Dario
    EUROPEAN JOURNAL OF CONTROL, 2025, 81
  • [10] Adaptive multi-penalty regularization based on a generalized Lasso path
    Grasmair, Markus
    Klock, Timo
    Naumova, Valeriya
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2020, 49 (01) : 30 - 55