Online gradient descent algorithms for functional data learning

Cited by: 17
Authors
Chen, Xiaming [1]
Tang, Bohao [2]
Fan, Jun [3]
Guo, Xin [4]
Affiliations
[1] Shantou Univ, Dept Comp Sci, Shantou, Peoples R China
[2] Johns Hopkins Bloomberg Sch Publ Hlth, Dept Biostat, Baltimore, MD USA
[3] Hong Kong Baptist Univ, Dept Math, Kowloon, Hong Kong, Peoples R China
[4] Hong Kong Polytech Univ, Dept Appl Math, Kowloon, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Learning theory; Online learning; Gradient descent; Reproducing kernel Hilbert space; Error analysis; RATES; CONVERGENCE; PREDICTION; REGRESSION; MINIMAX; ERROR;
DOI
10.1016/j.jco.2021.101635
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The functional linear model is a widely applied framework for regression problems, including those with intrinsically infinite-dimensional data. Online gradient descent methods, despite their demonstrated power for processing streaming or large-scale data, are not well studied for learning with functional data. In this paper, we study reproducing kernel-based online learning algorithms for functional data, and derive convergence rates for the expected excess prediction risk under both online and finite-horizon step-size settings. It is well understood that nontrivial uniform convergence rates for the estimation task depend on the regularity of the slope function. Surprisingly, the convergence rates we derive for the prediction task require no regularity assumption on the slope function. Our analysis reveals an intrinsic difference between the estimation task and the prediction task in functional data learning. (c) 2021 Elsevier Inc. All rights reserved.
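The sketch below is a minimal illustration, not the authors' published algorithm, of kernel-based online gradient descent for the functional linear model y_t = \int beta(s) X_t(s) ds + noise described in the abstract. The Gaussian kernel, the grid discretization of the covariate curves, and the polynomially decaying step size eta_t = eta0 * (t+1)^(-theta) are assumptions chosen only for illustration.

import numpy as np

def gaussian_kernel(s, t, sigma=0.2):
    """Illustrative reproducing kernel K(s, t) on [0, 1] (assumed, not from the paper)."""
    return np.exp(-(s - t) ** 2 / (2.0 * sigma ** 2))

def online_functional_gd(X, y, grid, eta0=0.5, theta=0.5, sigma=0.2):
    """X: (T, m) functional covariates discretized on `grid` (length m); y: (T,) responses.
    Returns the slope estimate beta evaluated on `grid` after one online pass."""
    T, m = X.shape
    K = gaussian_kernel(grid[:, None], grid[None, :], sigma)  # (m, m) kernel Gram matrix
    beta = np.zeros(m)                                        # initial slope estimate beta_1 = 0
    for t in range(T):
        # Predicted response: quadrature approximation of \int beta(s) X_t(s) ds
        y_hat = np.mean(beta * X[t])
        # Polynomially decaying step size (illustrative schedule)
        eta_t = eta0 * (t + 1) ** (-theta)
        # RKHS gradient of the squared loss: (y_hat - y_t) * \int K(., s) X_t(s) ds
        grad = (y_hat - y[t]) * (K @ X[t]) / m
        beta = beta - eta_t * grad
    return beta

# Toy usage: recover a smooth slope from Brownian-motion-like covariate curves.
rng = np.random.default_rng(0)
m, T = 50, 2000
grid = np.linspace(0.0, 1.0, m)
beta_true = np.sin(2 * np.pi * grid)
X = np.cumsum(rng.standard_normal((T, m)) / np.sqrt(m), axis=1)
y = X @ beta_true / m + 0.05 * rng.standard_normal(T)
beta_hat = online_functional_gd(X, y, grid)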
Pages: 14