Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization

Times Cited: 0
Authors
Tang, Ganyi [1 ]
Fan, Lili [1 ]
Shi, Jianguo [1 ]
Tan, Jingjing [1 ]
Lu, Guifu [1 ]
Affiliations
[1] Anhui Polytech Univ, Sch Comp & Informat, Wuhu, Peoples R China
Source
JOURNAL OF INTERNET TECHNOLOGY | 2023, Vol. 24, No. 4
Funding
National Natural Science Foundation of China; Natural Science Foundation of Anhui Province
Keywords
BPCA; Avoiding optimal mean; Sparse modeling; L1-norm; Elastic net; PRINCIPAL COMPONENT ANALYSIS; 2-DIMENSIONAL PCA; NORM; 2DPCA;
DOI
10.53106/160792642023072404016
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the basic premise that the sample mean is zero, i.e., that the optimal mean is the center of the data. In fact, this premise holds only for PCA/2DPCA methods based on the L2-norm. For robust PCA/2DPCA with the L1-norm, the optimal mean deviates from zero, and estimating it is computationally expensive. Another shortcoming of PCA/2DPCA is that it pays insufficient attention to the intrinsic correlations within parts of the data. To tackle these issues, we introduce the maximized variance of sample differences into block principal component analysis (BPCA) and propose a robust method that avoids the optimal mean while extracting orthonormal features. BPCA, which generalizes both PCA and 2DPCA, is a general framework specialized in part-based learning and can make better use of partial correlations. However, projection features without sparsity not only incur higher computational complexity but also lack semantic interpretability. We therefore integrate the elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. These two BPCA methods (non-sparse and sparse) make the zero-mean assumption unnecessary and avoid computing the optimal mean. Experiments on benchmark databases demonstrate the usefulness of the two proposed methods in image classification and image reconstruction.
Pages: 989-1000
Page count: 12
Related Papers (showing items 21-30 of 50)
  • [21] Principal component analysis based on L1-norm maximization
    Kwak, Nojun
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2008, 30 (09) : 1672 - 1680
  • [22] Elastic preserving projections based on L1-norm maximization
    Yuan, Sen
    Mao, Xia
    Chen, Lijiang
    Multimedia Tools and Applications, 2018, 77 : 21671 - 21691
  • [23] Sparse Least Mean Fourth Filter with Zero-Attracting l1-Norm Constraint
    Gui, Guan
    Adachi, Fumiyuki
    2013 9TH INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS AND SIGNAL PROCESSING (ICICS), 2013,
  • [24] Sparse Representation Classification Based Linear Integration of l1-norm and l2-norm for Robust Face Recognition
    Awedat, Khalfalla
    Essa, Almabrok
    Asari, Vijayan
    2017 IEEE INTERNATIONAL CONFERENCE ON ELECTRO INFORMATION TECHNOLOGY (EIT), 2017, : 447 - 451
  • [25] Robust Dictionary Learning with Capped l1-Norm
    Jiang, Wenhao
    Nie, Feiping
    Huang, Heng
    PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), 2015, : 3590 - 3596
  • [26] Robust 2DPCA With Non-greedy l1-Norm Maximization for Image Analysis
    Wang, Rong
    Nie, Feiping
    Yang, Xiaojun
    Gao, Feifei
    Yao, Minli
    IEEE TRANSACTIONS ON CYBERNETICS, 2015, 45 (05) : 1108 - 1112
  • [27] Inference robust to outliers with l1-norm penalization
    Beyhum, Jad
    ESAIM-PROBABILITY AND STATISTICS, 2020, 24 : 688 - 702
  • [28] BEYOND l1-NORM MINIMIZATION FOR SPARSE SIGNAL RECOVERY
    Mansour, Hassan
    2012 IEEE STATISTICAL SIGNAL PROCESSING WORKSHOP (SSP), 2012, : 337 - 340
  • [29] Sparse portfolio selection via the sorted l1-Norm
    Kremer, Philipp J.
    Lee, Sangkyun
    Bogdan, Malgorzata
    Paterlini, Sandra
    JOURNAL OF BANKING & FINANCE, 2020, 110
  • [30] Kernel Entropy Component Analysis with Nongreedy L1-Norm Maximization
    Ji, Haijin
    Huang, Song
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2018, 2018