Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization

Cited by: 0
Authors
Tang, Ganyi [1 ]
Fan, Lili [1 ]
Shi, Jianguo [1 ]
Tan, Jingjing [1 ]
Lu, Guifu [1 ]
Affiliations
[1] Anhui Polytech Univ, Sch Comp & Informat, Wuhu, Peoples R China
Source
JOURNAL OF INTERNET TECHNOLOGY | 2023 / Vol. 24 / No. 4
Funding
National Natural Science Foundation of China; Natural Science Foundation of Anhui Province;
Keywords
BPCA; Avoiding optimal mean; Sparse modeling; L1-norm; Elastic net; PRINCIPAL COMPONENT ANALYSIS; 2-DIMENSIONAL PCA; NORM; 2DPCA;
DOI
10.53106/160792642023072404016
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the premise that the sample mean is zero, i.e., that the optimal mean is the center of the data. In fact, this premise holds only for PCA/2DPCA methods based on the L2-norm. For robust PCA/2DPCA with the L1-norm, the optimal mean deviates from zero, and estimating it is computationally expensive. Another shortcoming of PCA/2DPCA is that it pays insufficient attention to the intrinsic correlations within parts of the data. To tackle these issues, we introduce the maximization of the variance of sample differences into block principal component analysis (BPCA) and propose a robust method that avoids the optimal mean while extracting orthonormal features. BPCA, which generalizes both PCA and 2DPCA, is a general PCA/2DPCA framework specialized for part-based learning and can make better use of partial correlations. However, projection features without sparsity not only incur higher computational complexity but also lack semantic interpretability. We therefore integrate the elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. These two BPCA methods (non-sparse and sparse) make the zero-mean presumption unnecessary and avoid computing the optimal mean. Experiments on benchmark databases demonstrate the effectiveness of the two proposed methods in image classification and image reconstruction.
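The core idea of "avoiding the optimal mean" can be illustrated with a minimal sketch: instead of centering the data (which requires estimating an L1-optimal mean), one maximizes the L1-norm objective over pairwise sample differences, in which any common offset cancels. The sketch below applies a Kwak-style fixed-point iteration for L1-PCA to the difference vectors; the function name and all implementation details are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def l1_pca_pairwise(X, n_iter=100, seed=0):
    """Greedy first L1-norm principal direction from pairwise differences.

    Maximizes sum_{i<j} |w^T (x_i - x_j)|, so no data mean is ever
    estimated or subtracted: the offset cancels inside each difference.
    (Illustrative sketch only; not the published BPCA procedure.)
    """
    n = X.shape[0]
    # All pairwise differences x_i - x_j for i < j; the common mean cancels.
    D = np.array([X[i] - X[j] for i in range(n) for j in range(i + 1, n)])
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(D @ w)
        s[s == 0] = 1.0               # avoid degenerate zero signs
        w_new = D.T @ s               # fixed-point update of L1 objective
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):     # converged to a local maximizer
            break
        w = w_new
    return w
```

Because the objective is a sum of absolute projections, the update is sign-based rather than eigen-based, which is what gives the L1 formulation its robustness to outliers; on data stretched along one axis, the returned unit vector aligns with that axis (up to sign) regardless of where the data mean sits.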
Pages: 989-1000
Page count: 12