Stochastic PCA with l2 and l1 Regularization

Cited: 0
Authors
Mianjy, Poorya [1 ]
Arora, Raman [1 ]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We revisit convex-relaxation-based methods for stochastic optimization of principal component analysis (PCA). While methods that directly solve the nonconvex problem have been shown to be optimal in terms of statistical and computational efficiency, methods based on convex relaxation enjoy comparable, or even superior, empirical performance; this motivates a deeper formal understanding of the latter. We therefore study variants of stochastic gradient descent for a convex relaxation of PCA with (a) l2, (b) l1, and (c) elastic-net (l1 + l2) regularization, in the hope that these variants yield (a) better iteration complexity, (b) better control of the rank of the intermediate iterates, and (c) both, respectively. We show, theoretically and empirically, that compared with previous convex-relaxation-based methods, the proposed variants converge faster and improve the overall runtime needed to achieve a user-specified epsilon-suboptimality on the PCA objective. Furthermore, the proposed methods are shown to converge both in the PCA objective and in the distance between subspaces. However, a gap in computational requirements remains between the proposed methods and existing nonconvex approaches.
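The convex relaxation described in the abstract optimizes over the convex hull of rank-k projection matrices (the Fantope: symmetric M with eigenvalues in [0, 1] and trace k), updating with stochastic gradients from individual samples and applying l1/l2 penalties. The sketch below is an illustrative reconstruction of that setup, not the authors' algorithm: the function names, step-size schedule, penalty weights, and the prox-then-project splitting are all assumptions for illustration.

```python
import numpy as np

def fantope_projection(M, k):
    # Project symmetric M onto the Fantope {0 <= X <= I, tr X = k}:
    # shift the eigenvalues by theta, clip to [0, 1], and bisect on
    # theta until the clipped eigenvalues sum to k.
    w, V = np.linalg.eigh((M + M.T) / 2)
    lo, hi = w.min() - 1.0, w.max()  # sum is d at lo, 0 at hi
    for _ in range(60):
        theta = (lo + hi) / 2
        if np.clip(w - theta, 0.0, 1.0).sum() > k:
            lo = theta  # too much mass: shift further down
        else:
            hi = theta
    g = np.clip(w - (lo + hi) / 2, 0.0, 1.0)
    return (V * g) @ V.T  # V @ diag(g) @ V.T

def stochastic_pca_l1_l2(samples, k, eta=0.1, l1=0.0, l2=0.0):
    # One pass of projected stochastic proximal gradient ascent on
    #   max_{M in Fantope} <E[x x^T], M> - l1*||M||_1 - (l2/2)*||M||_F^2
    # (a heuristic splitting: gradient step, l1 prox, then projection).
    d = samples.shape[1]
    M = np.zeros((d, d))
    for t, x in enumerate(samples, 1):
        step = eta / np.sqrt(t)  # assumed 1/sqrt(t) step-size decay
        # gradient step with multiplicative l2 shrinkage
        M = (1 - step * l2) * M + step * np.outer(x, x)
        # entrywise soft-thresholding = prox of the l1 penalty
        M = np.sign(M) * np.maximum(np.abs(M) - step * l1, 0.0)
        M = fantope_projection(M, k)
    return M
```

A k-dimensional subspace estimate can then be read off from the top-k eigenvectors of the returned M; the l1 prox keeps intermediate iterates sparse before projection, which is the rank-control effect the abstract attributes to the l1 variant.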
Pages: 9