Improving kernel online learning with a snapshot memory

Cited by: 0
Authors
Trung Le
Khanh Nguyen
Dinh Phung
Affiliations
[1] Department of Data Science and AI, Monash University
[2] VinAI Research
Source
Machine Learning | 2022 / Volume 111
Keywords
Kernel online learning; Incremental stochastic gradient descent; Online learning; Kernel methods; Stochastic optimization
DOI
Not available
Abstract
We propose in this paper the Stochastic Variance-reduced Gradient Descent for Kernel Online Learning (DualSVRG), which obtains an $\varepsilon$-approximate linear convergence rate and is not vulnerable to the curse of kernelization. Our approach uses a variance-reduction technique to reduce the variance of the full-gradient estimate, and further exploits recent work on dual-space gradient descent for online learning to achieve model optimality. This is achieved by introducing the concept of an instant memory, a snapshot that stores the most recent incoming data instances, and by proposing three transformer oracles, namely the budget, coverage, and always-move oracles. We further develop a rigorous theoretical analysis to demonstrate that our proposed approach obtains the $\varepsilon$-approximate linear convergence rate while maintaining model sparsity, hence encouraging fast training. We conduct extensive experiments on several benchmark datasets to compare DualSVRG with state-of-the-art baselines in both batch and online settings. The experimental results show that DualSVRG yields superior predictive performance while requiring training time comparable to the baselines.
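To make the variance-reduction idea concrete, below is a minimal Python sketch of SVRG applied to kernel ridge regression over a fixed buffer of points, standing in for the snapshot memory described in the abstract. The names (rbf_kernel, kernel_svrg), the plain squared-loss objective, and all hyperparameter values are our illustrative assumptions; the paper's actual DualSVRG operates in the dual space and adds the budget, coverage, and always-move oracles, none of which this sketch implements.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=2.0):
    """Pairwise RBF kernel matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_svrg(X, y, lam=1e-2, lr=0.01, epochs=50, gamma=2.0, seed=0):
    """SVRG on kernel ridge regression over a fixed snapshot buffer.

    Minimizes F(a) = (1/2n) * ||K a - y||^2 + (lam/2) * a^T K a,
    where K is the kernel matrix of the buffered points. Each epoch
    takes a snapshot of the coefficients and one full-gradient pass;
    inner updates then use the variance-reduced gradient
        g_i(a) - g_i(a_snap) + full_grad(a_snap).
    The step size lr must be tuned to the kernel spectrum.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    a = np.zeros(n)
    for _ in range(epochs):
        a_snap = a.copy()
        resid_snap = K @ a_snap - y
        # Full gradient at the snapshot (the expensive pass).
        full_grad = K @ resid_snap / n + lam * (K @ a_snap)
        for _ in range(n):
            i = rng.integers(n)
            # Per-point gradients at the current iterate and the snapshot.
            g_cur = (K[i] @ a - y[i]) * K[i] + lam * (K @ a)
            g_snap = resid_snap[i] * K[i] + lam * (K @ a_snap)
            # Variance-reduced update: unbiased, with shrinking variance.
            a -= lr * (g_cur - g_snap + full_grad)
    return a

if __name__ == "__main__":
    # Toy usage: fit sin(2x) from a buffer of 40 points.
    X = np.linspace(0.0, 3.0, 40)[:, None]
    y = np.sin(2.0 * X[:, 0])
    a = kernel_svrg(X, y)
    mse = np.mean((rbf_kernel(X, X) @ a - y) ** 2)
    print(f"training MSE after SVRG: {mse:.4f}")
```

Because the gradient here only ever ranges over the fixed buffer, the learned expansion stays sparse, which mirrors the sparsity argument in the abstract; the paper's oracles decide how buffer points are admitted and replaced, a policy this toy version simply fixes in advance.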
Pages: 997–1018
Number of pages: 21