Fast Tucker Factorization for Large-Scale Tensor Completion

Cited by: 8
Authors
Lee, Dongha [1 ]
Lee, Jaehyung [1 ]
Yu, Hwanjo [1 ]
Affiliations
[1] Pohang Univ Sci & Technol, Pohang, South Korea
Keywords
tensor completion; Tucker factorization; coordinate descent; caching algorithm; disk-based data processing; decompositions; matrix
DOI
10.1109/ICDM.2018.00142
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Tensor completion is the task of completing multi-aspect data represented as a tensor by accurately predicting its missing entries. It is mainly solved by tensor factorization methods, and among them Tucker factorization has attracted considerable interest due to its powerful ability to learn latent factors and even their interactions. Although several Tucker methods have been developed to reduce memory and computational complexity, the state-of-the-art method still 1) generates redundant computations and 2) cannot factorize a tensor that exceeds the size of memory. This paper proposes FTCOM, a fast and scalable Tucker factorization method for tensor completion. FTCOM performs element-wise updates of the factor matrices based on coordinate descent and adopts a novel caching algorithm that stores frequently required intermediate data. It also uses a tensor file for disk-based data processing, loading only a small part of the tensor into memory at a time. Experimental results show that FTCOM is much faster and more scalable than all competitors: it significantly shortens the training time of Tucker factorization, especially on real-world tensors, and it can run on a billion-scale tensor larger than the memory capacity of a single machine.
Pages: 1098-1103
Number of pages: 6
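To make the abstract's description concrete, below is a minimal sketch of element-wise coordinate-descent updates for Tucker-based tensor completion on observed entries only. It is not the authors' FTCOM implementation: the core-tensor update, the disk-based tensor file, and the paper's actual caching algorithm are omitted, and only a naive per-slice cache of intermediate products is shown. The function names (`predict`, `ccd_sweep`), the squared loss with L2 regularization, and the toy data are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): coordinate-descent updates
# for Tucker tensor completion, x_ijk ~ sum_pqr G[p,q,r] A[i,p] B[j,q] C[k,r].
import numpy as np

def predict(G, A, B, C, i, j, k):
    """Reconstruct a single entry of the tensor from the Tucker model."""
    return np.einsum('pqr,p,q,r->', G, A[i], B[j], C[k])

def ccd_sweep(idx, vals, G, A, B, C, lam=0.1):
    """One coordinate-descent sweep over the rows of factor matrix A.
    idx: (n_obs, 3) integer array of observed coordinates, vals: (n_obs,) values."""
    P = A.shape[1]
    for i in np.unique(idx[:, 0]):
        mask = idx[:, 0] == i                 # observed entries in slice i
        rows, x = idx[mask], vals[mask]
        # Cache intermediate products D[n, p] = sum_qr G[p,q,r] B[j,q] C[k,r];
        # a naive stand-in for the "frequently required intermediate data" idea.
        D = np.stack([np.einsum('pqr,q,r->p', G, B[j], C[k]) for _, j, k in rows])
        y = D @ A[i]                          # current predictions for slice i
        for p in range(P):
            r = x - y + D[:, p] * A[i, p]     # residual with A[i,p]'s term removed
            new = (D[:, p] @ r) / (lam + D[:, p] @ D[:, p])
            y += D[:, p] * (new - A[i, p])    # keep cached predictions consistent
            A[i, p] = new
    return A

# Tiny usage example on a random 20x20x20 tensor with rank (3, 3, 3);
# the core tensor G is kept fixed in this sketch.
rng = np.random.default_rng(0)
I = J = K = 20
P = Q = R = 3
G = rng.normal(size=(P, Q, R))
A, B, C = (rng.normal(size=(n, r)) for n, r in [(I, P), (J, Q), (K, R)])
idx = rng.integers(0, I, size=(500, 3))       # observed coordinates (COO format)
vals = rng.normal(size=500)                   # observed values
for _ in range(5):
    A = ccd_sweep(idx, vals, G, A, B, C)                                   # update A
    B = ccd_sweep(idx[:, [1, 0, 2]], vals, G.transpose(1, 0, 2), B, A, C)  # update B
    C = ccd_sweep(idx[:, [2, 1, 0]], vals, G.transpose(2, 1, 0), C, B, A)  # update C

pred = np.array([predict(G, A, B, C, *t) for t in idx])
print("train RMSE:", np.sqrt(np.mean((vals - pred) ** 2)))
```

The mode permutations in the loop simply reuse the same sweep routine for each factor matrix by rotating the coordinate columns and the core tensor's modes; FTCOM's disk-based processing and caching strategy would replace the in-memory arrays and the per-slice recomputation shown here.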