Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization

Cited: 0
Authors
Tang, Ganyi [1 ]
Fan, Lili [1 ]
Shi, Jianguo [1 ]
Tan, Jingjing [1 ]
Lu, Guifu [1 ]
Affiliations
[1] Anhui Polytech Univ, Sch Comp & Informat, Wuhu, Peoples R China
Source
JOURNAL OF INTERNET TECHNOLOGY | 2023, Vol. 24, No. 4
Funding
National Natural Science Foundation of China; Natural Science Foundation of Anhui Province
Keywords
BPCA; Avoiding optimal mean; Sparse modeling; L1-norm; Elastic net; PRINCIPAL COMPONENT ANALYSIS; 2-DIMENSIONAL PCA; NORM; 2DPCA;
DOI
10.53106/160792642023072404016
CLC Number
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the basic premise that the sample mean is zero, i.e., that the optimal mean is the center of the data. In fact, this premise only holds for PCA/2DPCA methods based on the L2-norm. For robust PCA/2DPCA methods with the L1-norm, the optimal mean deviates from zero, and estimating it leads to expensive computation. Another shortcoming of PCA/2DPCA is that it does not pay enough attention to the intrinsic correlation within parts of the data. To tackle these issues, we introduce the maximization of the variance of sample differences into Block principal component analysis (BPCA) and propose a robust method that avoids the optimal mean while extracting orthonormal features. BPCA, which generalizes both PCA and 2DPCA, is a general PCA/2DPCA framework specialized in part-based learning and can make better use of partial correlations. However, projection features without sparsity not only incur higher computational complexity but also lack semantic interpretability. We therefore integrate the elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. These two BPCA methods (non-sparse and sparse) make the zero-mean assumption unnecessary and avoid computing the optimal mean. Experiments on benchmark databases demonstrate the effectiveness of the two proposed methods in image classification and image reconstruction.
Pages: 989-1000
Page count: 12