Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors

Cited by: 8
Authors
Xiao, Chuanfu [1 ,2 ]
Yang, Chao [1 ,2 ,3 ]
Li, Min [4 ]
Affiliations
[1] Peking Univ, Sch Math Sci, CAPT, Beijing 100871, Peoples R China
[2] Peking Univ, Sch Math Sci, CCSE, Beijing 100871, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing 100871, Peoples R China
[4] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
Keywords
Low multilinear rank approximation; Truncated Tucker decomposition; Alternating least squares; Parallelization; Principal components; Diagonalization; Dimensionality; Decomposition
DOI
10.1007/s10915-021-01493-0
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Subject classification code
070104
Abstract
The low multilinear rank approximation, also known as the truncated Tucker decomposition, has been extensively utilized in many applications that involve higher-order tensors. Popular methods for low multilinear rank approximation usually rely directly on matrix SVD, and therefore often suffer from the notorious intermediate data explosion issue and are hard to parallelize, especially when the input tensor is large. In this paper, we propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors. The proposed ALS-based approaches eliminate the redundant computation of singular vectors of intermediate matrices and are therefore free of data explosion. Moreover, the new methods are more flexible, with an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis reveals that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods can substantially reduce the total cost of the original SVD-based methods and are highly scalable for parallel computing.
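To make the setting concrete, below is a minimal NumPy sketch of the classical higher-order orthogonal iteration (HOOI), the standard ALS-type scheme for the low multilinear rank approximation: each factor matrix is updated in turn from the leading left singular vectors of the tensor contracted with all the other factors, so only small intermediate matrices are ever decomposed. This is an illustrative baseline of the algorithm family the abstract describes, not the paper's specific truncated-HOSVD variants; the function names and the random orthonormal initialization are assumptions of this sketch.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iter=20, seed=0):
    """HOOI: ALS-style low multilinear rank (truncated Tucker) approximation.

    Each sweep updates factor U[n] as the leading left singular vectors of
    T contracted with every other factor, so the SVD is only applied to a
    small I_n-by-prod(r_m) matrix rather than a full unfolding of T.
    Returns the core tensor G and the factor matrices U.
    """
    rng = np.random.default_rng(seed)
    N = T.ndim
    # Random orthonormal initialization (illustrative choice; truncated
    # HOSVD factors are another common starting point).
    U = [np.linalg.qr(rng.standard_normal((T.shape[n], ranks[n])))[0]
         for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            # Contract T with all factors except U[n].
            Y = T
            for m in range(N):
                if m != n:
                    Y = np.moveaxis(
                        np.tensordot(Y, U[m], axes=(m, 0)), -1, m)
            # Leading r_n left singular vectors of the small unfolding.
            Uf, _, _ = np.linalg.svd(unfold(Y, n), full_matrices=False)
            U[n] = Uf[:, :ranks[n]]
    # Core tensor: T contracted with every factor.
    G = T
    for m in range(N):
        G = np.moveaxis(np.tensordot(G, U[m], axes=(m, 0)), -1, m)
    return G, U
```

On a tensor of exact multilinear rank (r1, r2, r3), a few sweeps recover the approximation to machine precision; for noisy data the loop would instead run until the fit improvement drops below a tolerance, which is the adjustable stopping criterion the abstract refers to.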
Pages: 25