Avoiding Optimal Mean Robust and Sparse BPCA with L1-norm Maximization

Cited: 0
Authors
Tang, Ganyi [1 ]
Fan, Lili [1 ]
Shi, Jianguo [1 ]
Tan, Jingjing [1 ]
Lu, Guifu [1 ]
Affiliations
[1] Anhui Polytech Univ, Sch Comp & Informat, Wuhu, Peoples R China
Source
JOURNAL OF INTERNET TECHNOLOGY | 2023, Vol. 24, No. 04
Funding
National Natural Science Foundation of China; Natural Science Foundation of Anhui Province;
Keywords
BPCA; Avoiding optimal mean; Sparse modeling; L1-norm; Elastic net; PRINCIPAL COMPONENT ANALYSIS; 2-DIMENSIONAL PCA; NORM; 2DPCA;
DOI
10.53106/160792642023072404016
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently, robust PCA/2DPCA methods have achieved great success in subspace learning. Nevertheless, most of them rest on the premise that the sample mean is zero, i.e., that the optimal mean is the center of the data. In fact, this premise holds only for PCA/2DPCA methods based on the L2-norm. For robust PCA/2DPCA methods with the L1-norm, the optimal mean deviates from zero, and estimating it is computationally expensive. Another shortcoming of PCA/2DPCA is that it pays insufficient attention to the intrinsic correlation within parts of the data. To tackle these issues, we introduce the maximization of the variance of sample differences into block principal component analysis (BPCA) and propose a robust method that extracts orthonormal features while avoiding the optimal mean. BPCA, a generalization of PCA and 2DPCA specialized in part-based learning, makes better use of this partial correlation. However, projection features without sparsity not only incur higher computational complexity but also lack semantic interpretability. We therefore integrate the elastic net into the avoiding-optimal-mean robust BPCA to impose sparsity constraints on the projection features. These two BPCA methods (non-sparse and sparse) make the zero-mean assumption unnecessary and avoid computing the optimal mean. Experiments on benchmark databases demonstrate the effectiveness of the two proposed methods in image classification and image reconstruction.
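Illustration (not from the paper): the mean-free idea above can be read as an objective of roughly the form max_W sum_{i<j} ||W^T (x_i - x_j)||_1 subject to W^T W = I; because only pairwise differences x_i - x_j enter, the data mean cancels and no optimal-mean estimate is needed. Below is a minimal Python sketch for a single projection direction, assuming a PCA-L1-style fixed-point sign iteration (Kwak, 2008) rather than the paper's exact BPCA procedure; the function name and parameters are hypothetical.

import numpy as np

def l1_pca_pairwise(X, n_iter=100, seed=0):
    # Sketch: find one direction w maximizing sum_{i<j} |w^T (x_i - x_j)|.
    # Pairwise differences cancel the mean, so no optimal-mean step is
    # required (a hypothetical simplification of the paper's BPCA method).
    n, d = X.shape
    ii, jj = np.triu_indices(n, k=1)
    D = X[ii] - X[jj]                     # all pairwise differences, O(n^2) rows
    w = np.random.default_rng(seed).standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):               # PCA-L1 fixed-point sign iteration
        s = np.sign(D @ w)
        s[s == 0] = 1.0                   # break ties so the update stays nonzero
        w_new = D.T @ s
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):         # converged: signs have stabilized
            break
        w = w_new
    return w

Here X is an (n_samples, n_features) array; further directions would be obtained by deflation, and the sparse variant would add elastic-net shrinkage to the update, neither of which is shown.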
Pages: 989-1000
Page count: 12