On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA

Cited: 0
Authors
Garber, Dan [1 ]
Affiliations
[1] Technion Israel Inst Technol, Haifa, Israel
Source
Keywords
online learning; regret minimization; online PCA; online convex optimization;
DOI
None
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper we focus on the problem of Online Principal Component Analysis in the regret minimization framework. For this problem, all existing regret minimization algorithms for the fully-adversarial setting are based on a positive semidefinite convex relaxation, and hence require quadratic memory and an SVD computation (either thin or full) on each iteration, which amounts to at least quadratic runtime per iteration. This is in stark contrast to a corresponding stochastic i.i.d. variant of the problem, which has been studied extensively in recent years and admits very efficient gradient ascent algorithms that work directly on the natural non-convex formulation of the problem, and hence require only linear memory and linear runtime per iteration. This raises the question: can non-convex online gradient ascent algorithms be shown to minimize regret in online adversarial settings? In this paper we take a step towards answering this question. We introduce an adversarially-perturbed spiked-covariance model in which each data point is assumed to follow a fixed stochastic distribution with a non-zero spectral gap in the covariance matrix, but is then perturbed by some adversarial vector. This model is a natural extension of a well-studied standard stochastic setting that allows for non-stationary (adversarial) patterns to arise in the data and hence might serve as a significantly better approximation of real-world data streams. We show that, in an interesting regime of parameters, when the non-convex online gradient ascent algorithm is initialized with a "warm-start" vector, it provably minimizes the regret with high probability. We further discuss the possibility of computing such a "warm-start" vector, as well as the use of regularization to obtain fast regret rates. Our theoretical findings are supported by empirical experiments on both synthetic and real-world data.
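The non-convex algorithm discussed in the abstract can be illustrated with a minimal sketch. The code below is not the paper's exact method; it is an illustrative Oja-style online gradient ascent update for the natural non-convex formulation, assuming a per-round reward of (w·x)^2 over unit vectors, with the step size `eta` and the function name chosen for exposition. Each round costs O(d) time and memory, in contrast to the quadratic cost of the convex-relaxation approach.

```python
import numpy as np

def online_pca_oga(data_stream, w0, eta=0.05):
    """Illustrative non-convex online gradient ascent for online PCA.

    Each round the learner plays a unit vector w, observes a data point x,
    and earns reward (w . x)^2. The gradient of the reward in w is
    2*(w . x)*x (the factor 2 is folded into eta), and the iterate is
    projected back to the unit sphere by normalization. Memory and runtime
    per round are linear in the dimension d.
    """
    w = np.asarray(w0, dtype=float)
    w = w / np.linalg.norm(w)          # "warm-start" unit vector
    rewards = []
    for x in data_stream:
        rewards.append(float(np.dot(w, x) ** 2))  # reward for the played w
        w = w + eta * np.dot(w, x) * x            # gradient ascent step
        w = w / np.linalg.norm(w)                 # project to the unit sphere
    return w, rewards
```

On a synthetic spiked stream (a dominant direction plus small noise), a warm-started iterate stays aligned with the spike, matching the regime the abstract describes.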
Pages: 25