Non-Greedy L21-Norm Maximization for Principal Component Analysis

Cited by: 14
Authors
Nie, Feiping [1 ,2 ]
Tian, Lai [1 ,2 ]
Huang, Heng [3 ]
Ding, Chris [4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[2] Northwestern Polytech Univ, Sch Artificial Intelligence Opt & Elect iOPEN, Xian 710072, Peoples R China
[3] Univ Pittsburgh, Dept Elect & Comp Engn, Pittsburgh, PA 15261 USA
[4] Univ Texas Arlington, Dept Comp Sci & Engn, Arlington, TX 76019 USA
Funding
National Natural Science Foundation of China;
Keywords
Principal component analysis; Minimization; Covariance matrices; Robustness; Optimization; Convergence; Linear programming; robust dimensionality reduction; L21-norm maximization; FRAMEWORK;
DOI
10.1109/TIP.2021.3073282
Chinese Library Classification (CLC) code
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Principal Component Analysis (PCA) is one of the most important unsupervised methods for handling high-dimensional data. However, due to the high computational complexity of its eigen-decomposition solution, it is hard to apply PCA to large-scale data with high dimensionality, e.g., millions of data points with millions of variables. Meanwhile, the squared L2-norm based objective makes it sensitive to data outliers. In recent research, an L1-norm maximization based PCA method was proposed for efficient computation and robustness to outliers. However, that work used a greedy strategy to solve for the eigenvectors. Moreover, the L1-norm maximization based objective may not be the correct robust PCA formulation, because it loses the theoretical connection to the minimization of data reconstruction error, which is one of the most important intuitions and goals of PCA. In this paper, we propose to maximize an L21-norm based robust PCA objective, which is theoretically connected to the minimization of reconstruction error. More importantly, we propose efficient non-greedy optimization algorithms to solve our objective and the more general L21-norm maximization problem with theoretically guaranteed convergence. Experimental results on real-world data sets show the effectiveness of the proposed method for principal component analysis.
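To make the abstract's objective concrete, the sketch below illustrates the L21-norm maximization view of robust PCA: given mean-centered data X whose columns are samples, it seeks an orthonormal projection W that maximizes the sum of the L2-norms of the projected samples, sum_i ||W^T x_i||_2, using a simple alternating update (normalize the current projections, then refit W from an SVD). The function name l21_pca, the random initialization, and the stopping rule are illustrative assumptions; this is a minimal sketch of the general non-greedy technique under those assumptions, not necessarily the authors' exact algorithm.

```python
import numpy as np

def l21_pca(X, k, n_iter=100, tol=1e-8, seed=0):
    """Illustrative sketch (not the paper's exact algorithm):
    maximize sum_i ||W^T x_i||_2 over orthonormal W (d x k),
    where X is (d x n) with mean-centered samples as columns."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization of the projection matrix.
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    prev_obj = -np.inf
    for _ in range(n_iter):
        P = W.T @ X                           # k x n projected samples
        norms = np.linalg.norm(P, axis=0)     # ||W^T x_i||_2 per sample
        obj = norms.sum()                     # L21-norm objective value
        if obj - prev_obj < tol:              # stop when no longer improving
            break
        prev_obj = obj
        # Normalize each projected sample (guard against zero norms).
        G = P / np.maximum(norms, 1e-12)
        # Update W by maximizing tr(W^T M) with M = X G^T under W^T W = I,
        # whose closed-form solution is U V^T from the SVD of M.
        U, _, Vt = np.linalg.svd(X @ G.T, full_matrices=False)
        W = U @ Vt
    return W

# Usage: project mean-centered data onto two robust components.
rng = np.random.default_rng(1)
X = rng.standard_normal((20, 500))
X -= X.mean(axis=1, keepdims=True)
W = l21_pca(X, k=2)
Y = W.T @ X   # 2 x 500 low-dimensional representation
```

Each update cannot decrease the objective, since sum_i ||W_new^T x_i||_2 >= tr(W_new^T X G^T) >= tr(W_old^T X G^T) = sum_i ||W_old^T x_i||_2, which is the usual argument for the monotone convergence of such SVD-based fixed-point schemes.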
Pages: 5277-5286
Number of pages: 10
Related Papers (items 21-30 of 50)
  • [21] Robust and structural sparsity auto-encoder with L21-norm minimization
    Li, Rui
    Wang, Xiaodan
    Quan, Wen
    Song, Yafei
    Lei, Lei
    NEUROCOMPUTING, 2021, 425 : 71 - 81
  • [22] Block principal component analysis with L1-norm for image analysis
    Wang, Haixian
    PATTERN RECOGNITION LETTERS, 2012, 33 (05) : 537 - 542
  • [23] Principal component analysis in an asymmetric norm
    Tran, Ngoc M.
    Burdejova, Petra
    Ospienko, Maria
    Haerdle, Wolfgang K.
    JOURNAL OF MULTIVARIATE ANALYSIS, 2019, 171 : 1 - 21
  • [24] AN EFFICIENT ALGORITHM FOR L1-NORM PRINCIPAL COMPONENT ANALYSIS
    Yu, Linbin
    Zhang, Miao
    Ding, Chris
    2012 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2012, : 1377 - 1380
  • [25] L1-norm projection pursuit principal component analysis
    Choulakian, V
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2006, 50 (06) : 1441 - 1451
  • [26] Kernel l1-norm principal component analysis for denoising
    Ling, Xiao
    Bui, Anh
    Brooks, Paul
    OPTIMIZATION LETTERS, 2024, 18 (09) : 2133 - 2148
  • [27] Kernel Entropy Component Analysis with Nongreedy L1-Norm Maximization
    Ji, Haijin
    Huang, Song
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2018, 2018
  • [28] Face Recognition using Graph Extreme Learning Machine with L21-norm Regularization
    Abd Shehab, Mohanad
    Kahraman, Nihan
    Bilgin, Gokhan
    2017 10TH INTERNATIONAL CONFERENCE ON ELECTRICAL AND ELECTRONICS ENGINEERING (ELECO), 2017, : 881 - 884
  • [29] Link prediction using deep autoencoder-like non-negative matrix factorization with L21-norm
    Li, Tongfeng
    Zhang, Ruisheng
    Yao, Yabing
    Liu, Yunwu
    Ma, Jun
    APPLIED INTELLIGENCE, 2024, 54 (05) : 4095 - 4120