Adaptive weighting function for weighted nuclear norm based matrix/tensor completion

Cited by: 2
Authors
Zhao, Qian [1 ]
Lin, Yuji [1 ]
Wang, Fengxingyu [1 ]
Meng, Deyu [1 ,2 ,3 ,4 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710049, Shaanxi, Peoples R China
[2] Xi An Jiao Tong Univ, Key Lab Intelligent Networks & Network Secur, Minist Educ, Xian 710049, Shaanxi, Peoples R China
[3] Pazhou Lab Huangpu, Guangzhou 510555, Guangdong, Peoples R China
[4] Macau Univ Sci & Technol, Macao Inst Syst Engn, Taipa, Macao, Peoples R China
Keywords
Low-rankness; Weighted nuclear norm; Adaptive weighting function; Matrix/tensor completion; MATRIX FACTORIZATION; TENSOR COMPLETION; LEAST-SQUARES; RANK; ALGORITHM; IMAGE; REGULARIZATION; APPROXIMATION; SPARSITY
DOI
10.1007/s13042-023-01935-1
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The weighted nuclear norm provides a simple yet powerful tool for characterizing the intrinsic low-rank structure of a matrix, and has been successfully applied to the matrix completion problem. In previous studies, however, the weighting functions used to calculate the weights are fixed beforehand and do not change during the iterative process. Such predefined weighting functions may fail to precisely characterize the complicated structure underlying the observed data matrix, especially during the dynamic estimation process, which limits their performance. To address this issue, we propose an adaptive weighting function strategy for low-rank matrix/tensor completion. Specifically, we first parameterize the weighting function as a simple yet flexible neural network that can approximate a wide range of monotonically decreasing functions. We then propose an effective strategy, based on bi-level optimization, to adapt the weighting function, and incorporate this strategy into the alternating direction method of multipliers (ADMM) for solving low-rank matrix and tensor completion problems. Empirical studies on a series of synthetic and real data sets verify the effectiveness of the proposed approach compared with representative low-rank matrix and tensor completion methods.
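To make the weighted nuclear norm idea in the abstract concrete, the sketch below shows a minimal NumPy illustration of weighted singular value thresholding inside a simple fixed-point completion loop. This is not the authors' algorithm: the hand-picked decreasing weighting function weight_fn, the threshold tau, and the completion loop are assumptions standing in for the paper's learned neural-network weighting and its ADMM solver.

```python
import numpy as np

def weight_fn(sigma, theta=1.0, eps=1e-8):
    # Illustrative monotonically decreasing weighting function (assumption):
    # larger singular values get smaller weights, so they are shrunk less.
    # The paper instead learns this function with a small neural network.
    return theta / (sigma + eps)

def weighted_svt(Y, tau=1.0):
    # Weighted singular value thresholding: shrink each singular value
    # by its own weight, then reconstruct the matrix.
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weight_fn(s), 0.0)
    return (U * s_shrunk) @ Vt

def complete_matrix(M, mask, n_iters=200, tau=1.0):
    # Naive completion loop (assumption, not the paper's ADMM):
    # alternate a weighted shrinkage step with re-imposing observed entries.
    X = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        X = weighted_svt(X, tau)
        X = np.where(mask, M, X)  # keep observed entries fixed
    return X

# Example: recover a rank-3 matrix from roughly 50% observed entries.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 50))
mask = rng.random(M.shape) < 0.5
X_hat = complete_matrix(M, mask)
```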
Pages: 697-718
Number of pages: 22
Related Papers
50 records in total
  • [1] Adaptive weighting function for weighted nuclear norm based matrix/tensor completion
    Zhao, Qian
    Lin, Yuji
    Wang, Fengxingyu
    Meng, Deyu
    [J]. International Journal of Machine Learning and Cybernetics, 2024, 15: 697-718
  • [2] Traffic Matrix Completion by Weighted Tensor Nuclear Norm Minimization
    Miyata, Takamichi
    [J]. 2023 IEEE 20th Consumer Communications & Networking Conference (CCNC), 2023
  • [3] A weighted nuclear norm method for tensor completion
    College of Science, China Agricultural University, 100083 Beijing, China
    [J]. Int. J. Signal Process. Image Process. Pattern Recogn., 1 (1-12):
  • [4] Traffic matrix completion by weighted tensor nuclear norm minimization and time slicing
    Miyata, Takamichi
    [J]. IEICE Nonlinear Theory and Its Applications, 2024, 15(2): 311-323
  • [5] A Non-Local Tensor Completion Algorithm Based on Weighted Tensor Nuclear Norm
    Wang, Wenzhe
    Zheng, Jingjing
    Zhao, Li
    Chen, Huiling
    Zhang, Xiaoqin
    [J]. Electronics, 2022, 11(19)
  • [6] A Mixture of Nuclear Norm and Matrix Factorization for Tensor Completion
    Gao, Shangqi
    Fan, Qibin
    [J]. Journal of Scientific Computing, 2018, 75(1): 43-64
  • [7] A Mixture of Nuclear Norm and Matrix Factorization for Tensor Completion
    Gao, Shangqi
    Fan, Qibin
    [J]. Journal of Scientific Computing, 2018, 75: 43-64
  • [8] Weighted tensor nuclear norm minimization for tensor completion using tensor-SVD
    Mu, Yang
    Wang, Ping
    Lu, Liangfu
    Zhang, Xuyun
    Qi, Lianyong
    [J]. Pattern Recognition Letters, 2020, 130: 4-11
  • [9] Sparse and Truncated Nuclear Norm Based Tensor Completion
    Han, Zi-Fa
    Leung, Chi-Sing
    Huang, Long-Ting
    So, Hing Cheung
    [J]. Neural Processing Letters, 2017, 45(3): 729-743
  • [10] Sparse and Truncated Nuclear Norm Based Tensor Completion
    Han, Zi-Fa
    Leung, Chi-Sing
    Huang, Long-Ting
    So, Hing Cheung
    [J]. Neural Processing Letters, 2017, 45: 729-743