Online gradient descent algorithms for functional data learning

Cited by: 17
Authors
Chen, Xiaming [1 ]
Tang, Bohao [2 ]
Fan, Jun [3 ]
Guo, Xin [4 ]
Affiliations
[1] Shantou Univ, Dept Comp Sci, Shantou, Peoples R China
[2] Johns Hopkins Bloomberg Sch Publ Hlth, Dept Biostat, Baltimore, MD USA
[3] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
[4] Hong Kong Polytech Univ, Dept Appl Math, Kowloon, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Learning theory; Online learning; Gradient descent; Reproducing kernel Hilbert space; Error analysis; RATES; CONVERGENCE; PREDICTION; REGRESSION; MINIMAX; ERROR;
DOI
10.1016/j.jco.2021.101635
Chinese Library Classification
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
The functional linear model is a widely applied general framework for regression problems, including those with intrinsically infinite-dimensional data. Online gradient descent methods, despite their demonstrated power in processing online or large-scale data, are not well studied for learning with functional data. In this paper, we study reproducing kernel-based online learning algorithms for functional data, and derive convergence rates for the expected excess prediction risk under both the online and the finite-horizon step-size settings. It is well understood that nontrivial uniform convergence rates for the estimation task depend on the regularity of the slope function. Surprisingly, the convergence rates we derive for the prediction task require no regularity assumption on the slope function. Our analysis reveals the intrinsic difference between the estimation task and the prediction task in functional data learning. (c) 2021 Elsevier Inc. All rights reserved.
Pages: 14
Related Papers
50 records in total
  • [1] Online gradient descent learning algorithms
    Ying, Yiming
    Pontil, Massimiliano
    [J]. Foundations of Computational Mathematics, 2008, 8 (05): 561 - 596
  • [2] Online learning with inexact proximal online gradient descent algorithms
    Dixit, Rishabh
    Bedi, Amrit Singh
    Tripathi, Ruchi
    Rajawat, Ketan
    [J]. IEEE Transactions on Signal Processing, 2019, 67 (05): 1338 - 1352
  • [3] Simple stochastic and online gradient descent algorithms for pairwise learning
    Yang, Zhenhuan
    Lei, Yunwen
    Wang, Puyu
    Yang, Tianbao
    Ying, Yiming
    [J]. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34
  • [4] Learning by online gradient descent
    Biehl, M.
    Schwarze, H.
    [J]. Journal of Physics A: Mathematical and General, 1995, 28 (03): 643 - 656
  • [5] On the momentum term in gradient descent learning algorithms
    Qian, N.
    [J]. Neural Networks, 1999, 12 (01): 145 - 151
  • [6] Distributed gradient descent for functional learning
    Yu, Zhan
    Fan, Jun
    Shi, Zhongjie
    Zhou, Ding-Xuan
    [J]. IEEE Transactions on Information Theory, 2024, 70 (09): 6547 - 6571
  • [7] Dual space gradient descent for online learning
    Trung Le
    Tu Dinh Nguyen
    Vu Nguyen
    Dinh Phung
    [J]. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016, 29
  • [8] Online learning via congregational gradient descent
    Blackmore, Kim L.
    Williamson, Robert C.
    Mareels, Iven M. Y.
    Sethares, William A.
    [J]. Mathematics of Control, Signals and Systems, 1997, 10 (04): 331 - 363