On the L1-norm Approximation of a Matrix by Another of Lower Rank

Citations: 0
Authors
Tsagkarakis, Nicholas [1 ]
Markopoulos, Panos P. [2 ]
Pados, Dimitris A. [1 ]
Affiliations
[1] SUNY Buffalo, Dept Elect Engn, Buffalo, NY 14260 USA
[2] Rochester Inst Technol, Dept Elect & Microelect Engn, Rochester, NY 14623 USA
Funding
US National Science Foundation;
Keywords
Low rank approximation; L1-norm; principal component analysis (PCA); erroneous data; faulty measurements; machine learning; outlier resistance; subspace signal processing; uniform feature preservation; factorization;
DOI
10.1109/ICMLA.2016.133
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the past decade, there has been a growing documented effort to approximate a matrix by another of lower rank that minimizes the L1-norm of the residual matrix. In this paper, we first show that the problem is NP-hard. Then, we introduce a theorem on the sparsity of the residual matrix. The theorem sets the foundation for a novel algorithm that outperforms all existing counterparts in the L1-norm error minimization metric and exhibits high outlier resistance compared to the usual L2-norm error minimization in machine learning applications.
Pages: 768-773
Page count: 6
Related Papers (50 total)
  • [1] Practical Low-Rank Matrix Approximation under Robust L1-Norm
    Zheng, Yinqiang
    Liu, Guangcan
    Sugimoto, Shigeki
    Yan, Shuicheng
    Okutomi, Masatoshi
    [J]. 2012 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2012, : 1410 - 1417
  • [2] On the Complexity of Robust PCA and l1-Norm Low-Rank Matrix Approximation
    Gillis, Nicolas
    Vavasis, Stephen A.
    [J]. MATHEMATICS OF OPERATIONS RESEARCH, 2018, 43 (04) : 1072 - 1084
  • [3] Low Rank Approximation with Entrywise l1-Norm Error
    Song, Zhao
    Woodruff, David P.
    Zhong, Peilin
    [J]. STOC'17: PROCEEDINGS OF THE 49TH ANNUAL ACM SIGACT SYMPOSIUM ON THEORY OF COMPUTING, 2017, : 688 - 701
  • [4] L1-norm low-rank linear approximation for accelerating deep neural networks
    Zhao Z.
    Wang H.
    Sun H.
    He Z.
    [J]. 2020, Elsevier B.V., Netherlands, 400 : 216 - 226
  • [5] The Approximation of One Matrix by Another of Lower Rank
    Eckart, Carl
    Young, Gale
    [J]. PSYCHOMETRIKA, 1936, 1 (03) : 211 - 218
  • [6] Low-rank matrix decomposition in L1-norm by dynamic systems
    Liu, Yiguang
    Liu, Bingbing
    Pu, Yifei
    Chen, Xiaohui
    Cheng, Hong
    [J]. IMAGE AND VISION COMPUTING, 2012, 30 (11) : 915 - 921
  • [7] Trigonometric approximation of functions in L1-norm
    Chandra, Prem
    Karanjgaokar, Varsha
    [J]. PERIODICA MATHEMATICA HUNGARICA, 2022, 84 (02) : 177 - 185
  • [8] Rank-One Matrix Completion With Automatic Rank Estimation via L1-Norm Regularization
    Shi, Qiquan
    Lu, Haiping
    Cheung, Yiu-Ming
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29 (10) : 4744 - 4757
  • [9] L1-Norm Low-Rank Matrix Factorization by Variational Bayesian Method
    Zhao, Qian
    Meng, Deyu
    Xu, Zongben
    Zuo, Wangmeng
    Yan, Yan
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (04) : 825 - 839
  • [10] L1-Norm Low-Rank Matrix Decomposition by Neural Networks and Mollifiers
    Liu, Yiguang
    Yang, Songfan
    Wu, Pengfei
    Li, Chunguang
    Yang, Menglong
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2016, 27 (02) : 273 - 283