Robust Tensor Completion via Capped Frobenius Norm

Cited by: 12
Authors
Li, Xiao Peng
Wang, Zhi-Yong [1 ]
Shi, Zhang-Lei [2 ]
So, Hing Cheung [1 ]
Sidiropoulos, Nicholas D. [3 ]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
[2] China Univ Petr East China, Coll Sci, Qingdao 266580, Peoples R China
[3] Univ Virginia, Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
Keywords
Capped Frobenius norm; proximal block coordinate descent; robust recovery; tensor completion (TC); tensor ring; FACTORIZATION; IMAGE; MATRIX; OPTIMIZATION; TUTORIAL; TRACKING; RECOVERY; PCA;
DOI
10.1109/TNNLS.2023.3236415
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Tensor completion (TC) refers to restoring the missing entries of a given tensor by exploiting its low-rank structure. Most existing algorithms perform well in either Gaussian-noise or impulsive-noise scenarios, but typically not in both. Generally speaking, Frobenius-norm-based methods achieve excellent performance under additive Gaussian noise, while their recovery severely degrades under impulsive noise. Although algorithms using the ℓp-norm (0 < p < 2) or its variants can attain high restoration accuracy in the presence of gross errors, they are inferior to the Frobenius-norm-based methods when the noise is Gaussian-distributed. Therefore, an approach that performs well under both Gaussian and impulsive noise is desired. In this work, we use a capped Frobenius norm to restrain outliers, which corresponds to a form of the truncated least-squares loss function. The upper bound of the capped Frobenius norm is automatically updated using the normalized median absolute deviation during iterations. The proposed loss therefore achieves better performance than the ℓp-norm with outlier-contaminated observations and attains accuracy comparable to the Frobenius norm in Gaussian noise without parameter tuning. We then adopt half-quadratic theory to convert the nonconvex problem into a tractable multivariable problem, namely, convex optimization with respect to (w.r.t.) each individual variable. To solve the resultant task, we exploit the proximal block coordinate descent (PBCD) method and establish the convergence of the suggested algorithm: the objective function value is guaranteed to converge, and the variable sequence has a subsequence converging to a critical point. Experimental results on real-world images and videos demonstrate the superiority of the devised approach over several state-of-the-art algorithms in terms of recovery performance. MATLAB code is available at https://github.com/Li-X-P/Codeof-Robust-Tensor-Completion.
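
To make the capped loss concrete, the following is a minimal NumPy sketch of a truncated least-squares data term whose cap is refreshed from the normalized median absolute deviation of the current residuals, in the spirit of the abstract. The function names (normalized_mad, capped_frobenius_loss, update_cap), the elementwise form min(r^2, eps^2), the scaling constant c, and the toy data are illustrative assumptions rather than the paper's exact formulation or its MATLAB implementation.

import numpy as np

def normalized_mad(r):
    # Normalized median absolute deviation: a robust scale estimate of the
    # residuals (the 1.4826 factor makes it consistent with the Gaussian
    # standard deviation).
    return 1.4826 * np.median(np.abs(r - np.median(r)))

def capped_frobenius_loss(X_hat, X_obs, mask, eps):
    # Truncated least-squares data term over the observed entries: each
    # squared residual is capped at eps**2, so a gross outlier contributes
    # at most eps**2 instead of growing without bound.
    r = (X_hat - X_obs)[mask]
    return 0.5 * np.sum(np.minimum(r ** 2, eps ** 2))

def update_cap(X_hat, X_obs, mask, c=2.0):
    # Data-driven refresh of the cap from the current residuals; c is an
    # illustrative multiple of the robust scale, not the paper's constant.
    r = (X_hat - X_obs)[mask]
    return c * normalized_mad(r)

# Toy usage: a partially observed 3-way tensor with sparse gross outliers
# and a placeholder estimate standing in for one completion iterate.
rng = np.random.default_rng(0)
X_true = rng.standard_normal((8, 8, 8))
mask = rng.random(X_true.shape) < 0.6            # 60% of entries observed
X_obs = X_true + 0.1 * rng.standard_normal(X_true.shape)
X_obs[rng.random(X_true.shape) < 0.05] += 10.0   # sparse impulsive errors
X_hat = np.zeros_like(X_obs)                     # placeholder estimate
eps = update_cap(X_hat, X_obs, mask)
print(capped_frobenius_loss(X_hat, X_obs, mask, eps))

Because the median-based scale is largely insensitive to sparse large errors, a cap of this kind shrinks toward the Gaussian noise level as the fit improves, which is consistent with the abstract's claim that the same loss can behave like the ordinary Frobenius norm on Gaussian-contaminated data while trimming the influence of outliers.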
Pages: 9700 - 9712
Number of pages: 13