Efficient Alternating Least Squares Algorithms for Low Multilinear Rank Approximation of Tensors

Cited by: 8
Authors
Xiao, Chuanfu [1,2]
Yang, Chao [1,2,3]
Li, Min [4]
Affiliations
[1] Peking Univ, Sch Math Sci, CAPT, Beijing 100871, Peoples R China
[2] Peking Univ, Sch Math Sci, CCSE, Beijing 100871, Peoples R China
[3] Peking Univ, Natl Engn Lab Big Data Anal & Applicat, Beijing 100871, Peoples R China
[4] Chinese Acad Sci, Inst Software, Beijing 100190, Peoples R China
Keywords
Low multilinear rank approximation; Truncated Tucker decomposition; Alternating least squares; Parallelization; PRINCIPAL-COMPONENTS; DIAGONALIZATION; DIMENSIONALITY; DECOMPOSITION;
DOI
10.1007/s10915-021-01493-0
Chinese Library Classification (CLC)
O29 [Applied Mathematics]
Subject Classification Code
070104
Abstract
The low multilinear rank approximation, also known as the truncated Tucker decomposition, has been extensively utilized in many applications that involve higher-order tensors. Popular methods for low multilinear rank approximation usually rely directly on the matrix SVD and therefore often suffer from the notorious intermediate data explosion issue; they are also not easy to parallelize, especially when the input tensor is large. In this paper, we propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors. The proposed ALS-based approaches eliminate the redundant computation of singular vectors of intermediate matrices and are therefore free of data explosion. In addition, the new methods are more flexible, with an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis reveals that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods can substantially reduce the total cost compared with the original SVD-based methods and are highly scalable for parallel computing.
Pages: 25
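The abstract above describes an alternating least squares (ALS) approach to the truncated Tucker decomposition (truncated HOSVD). For orientation only, the following is a minimal NumPy sketch of the standard HOOI-style ALS iteration for a rank-(r_1, ..., r_d) Tucker model. Every name and parameter in it (unfold, multi_mode_product, tucker_als, ranks, n_iter) is an illustrative assumption, and the sketch still takes a full SVD of each projected unfolding, which is precisely the kind of intermediate computation the paper's algorithms are designed to avoid.

import numpy as np

def unfold(T, mode):
    # Mode-`mode` unfolding: move that mode to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multi_mode_product(T, factors, skip=None):
    # Contract U_k^T with T along every mode k, optionally skipping one mode.
    for k, U in enumerate(factors):
        if k == skip:
            continue
        T = np.moveaxis(np.tensordot(U.T, T, axes=(1, k)), 0, k)
    return T

def tucker_als(T, ranks, n_iter=20, seed=0):
    # Generic HOOI-style ALS for a rank-(r_1, ..., r_d) Tucker approximation.
    rng = np.random.default_rng(seed)
    d = T.ndim
    # Random orthonormal starting factors (an HOSVD initialization is also common).
    factors = [np.linalg.qr(rng.standard_normal((T.shape[k], ranks[k])))[0]
               for k in range(d)]
    for _ in range(n_iter):
        for k in range(d):
            # Project all modes except k, then keep the leading left singular vectors.
            Y = multi_mode_product(T, factors, skip=k)
            U, _, _ = np.linalg.svd(unfold(Y, k), full_matrices=False)
            factors[k] = U[:, :ranks[k]]
    core = multi_mode_product(T, factors)
    return core, factors

if __name__ == "__main__":
    # Toy usage: approximate a random 30 x 40 x 50 tensor with multilinear rank (5, 5, 5).
    T = np.random.default_rng(1).standard_normal((30, 40, 50))
    core, factors = tucker_als(T, ranks=(5, 5, 5))
    approx = core
    for k, U in enumerate(factors):
        approx = np.moveaxis(np.tensordot(U, approx, axes=(1, k)), 0, k)
    print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))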