A general multi-factor norm based low-rank tensor completion framework

Times Cited: 1
Authors
Tian, Jialue [1 ]
Zhu, Yulian [2 ]
Liu, Jiahui [1 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Coll Artificial Intelligence, Nanjing 211106, Jiangsu, Peoples R China
[2] Nanjing Univ Aeronaut & Astronaut, Fundamental Expt Teaching Dept, Nanjing 211106, Jiangsu, Peoples R China
Keywords
Tensor completion; Tensor factorization; Unitary Transformed Tensor Multi-Factor Norm (UTTMFN); Tensor Nuclear Norm (TNN); Nonconvex optimization; COLOR IMAGE; FACTORIZATION; MATRIX; APPROXIMATION; MINIMIZATION; ALGORITHM;
DOI
10.1007/s10489-023-04477-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Low-rank tensor completion aims to recover the missing entries of a tensor from its partially observed data by exploiting the tensor's low-rank property. Since rank minimization is NP-hard, the convex nuclear norm is usually used as a surrogate for the rank and has obtained promising results. However, the nuclear norm is not a tight envelope of the rank and tends to over-penalize large singular values. In this paper, inspired by the effectiveness of the matrix Schatten-q norm, which is a tighter approximation of the rank when 0 < q < 1, we generalize the matrix Schatten-q norm to the tensor case and propose a Unitary Transformed Tensor Schatten-q norm (UTT-S_q) with an arbitrary unitary transform matrix. More importantly, we derive a factor tensor norm surrogate theorem: we prove that minimizing the large-scale UTT-S_q norm (which is nonconvex and intractable when 0 < q < 1) is equivalent to minimizing a weighted sum of multiple small-scale UTT-S_{q_i} norms with different q_i, each q_i >= 1. Based on this equivalence, we propose a low-rank tensor completion framework using a Unitary Transformed Tensor Multi-Factor Norm (UTTMFN) penalty. The optimization problem is solved using the Alternating Direction Method of Multipliers (ADMM), with a proof of convergence. Experimental results on synthetic data, images and videos show that the proposed UTTMFN achieves competitive results with state-of-the-art methods for tensor completion.
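The two ingredients the abstract relies on, the Schatten-q norm as a tighter rank surrogate and the factorization-based norm surrogate, have simple matrix analogues that can be checked numerically. The sketch below is a plain-NumPy illustration, not the paper's tensor algorithm; the `schatten_q` helper and the balanced-SVD factor split are illustrative choices, and the equivalence shown is the classical matrix case (nuclear norm as a minimum over Frobenius factor norms), which the paper generalizes to transformed tensors with multiple q_i.

```python
import numpy as np

def schatten_q(X, q):
    """Schatten-q (quasi-)norm of a matrix: (sum_i sigma_i^q)^(1/q)."""
    s = np.linalg.svd(X, compute_uv=False)
    return float((s ** q).sum() ** (1.0 / q))

# 1) As q -> 0, sum_i sigma_i^q approaches the rank, so Schatten-q with
#    0 < q < 1 tracks the rank more tightly than the nuclear norm (q = 1),
#    which is dominated by the large singular values.
sigma = np.array([10.0, 1.0, 0.1])              # singular values, rank 3
surrogates = [(sigma ** q).sum() for q in (1.0, 0.5, 0.1)]
# surrogates shrink toward rank = 3 as q decreases: 11.1, ~4.48, ~3.05

# 2) Matrix analogue of the factor-norm surrogate: the nuclear norm equals
#    (||U||_F^2 + ||V||_F^2) / 2 at the "balanced" SVD split of X = U @ V.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # rank 2
A, s, Bt = np.linalg.svd(X, full_matrices=False)
U = A * np.sqrt(s)                  # A @ diag(sqrt(s))
V = np.sqrt(s)[:, None] * Bt        # diag(sqrt(s)) @ Bt
nuclear = s.sum()
balanced = 0.5 * (np.linalg.norm(U, "fro") ** 2
                  + np.linalg.norm(V, "fro") ** 2)
# U @ V reconstructs X, and balanced matches nuclear up to round-off
```

The balanced split attains the minimum because splitting each singular value as sqrt(sigma_i) * sqrt(sigma_i) makes both factor norms equal; the paper's theorem plays the same game with tensor factors and per-factor exponents q_i >= 1, so each subproblem stays tractable even though the overall UTT-S_q objective is nonconvex.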
Pages: 19317 - 19337
Page count: 21