On Optimizing Distributed Tucker Decomposition for Sparse Tensors

Cited by: 12
Authors
Chakaravarthy, Venkatesan T. [1 ]
Choi, Jee W. [1 ]
Joseph, Douglas J. [1 ]
Murali, Prakash [1 ,2 ]
Pandian, Shivmaran S. [1 ]
Sabharwal, Yogish [1 ]
Sreedhar, Dheeraj [1 ]
Affiliations
[1] IBM Research, Armonk, NY 10504, USA
[2] Princeton University, Princeton, NJ 08544, USA
Keywords
Tensor decompositions; tensor distribution schemes
DOI
10.1145/3205289.3205315
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
The Tucker decomposition generalizes the notion of the Singular Value Decomposition (SVD) to tensors, the higher-dimensional analogues of matrices. We study the problem of constructing the Tucker decomposition of sparse tensors on distributed-memory systems via the HOOI procedure, a popular iterative method. The scheme used for distributing the input tensor among the processors (MPI ranks) critically influences the HOOI execution time. Prior work has proposed different distribution schemes: an offline scheme based on a sophisticated hypergraph partitioning method, and simple, lightweight alternatives that can be used in real time. While the hypergraph-based scheme typically results in faster HOOI execution, its complexity means that the time taken to determine the distribution is an order of magnitude higher than the execution time of a single HOOI iteration. Our main contribution is a lightweight distribution scheme that achieves the best of both worlds. We show that the scheme is near-optimal on certain fundamental metrics associated with the HOOI procedure and, as a result, near-optimal on the computational load (FLOPs). Though the scheme may incur higher communication volume, computation time is the dominant factor, and consequently the scheme achieves better overall HOOI execution time. Our experimental evaluation on large real-life tensors (with up to 4 billion elements) shows that the scheme outperforms the prior schemes on HOOI execution time by a factor of up to 3x. At the same time, its distribution time is comparable to that of the prior lightweight schemes and is typically less than the execution time of a single HOOI iteration.
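The record itself contains no code; as a concrete illustration of the HOOI procedure the abstract refers to, the following is a minimal single-node sketch in Python/NumPy using dense arrays. The function names (`ttm`, `hooi`), the random orthonormal initialization, and the fixed iteration count are illustrative assumptions, not the paper's method; the paper's contribution concerns how a sparse tensor is distributed across MPI ranks, which this sketch does not model.

```python
# Minimal single-node HOOI sketch for Tucker decomposition (dense
# NumPy arrays; illustrative only -- the paper targets sparse tensors
# distributed across MPI ranks).
import numpy as np

def ttm(X, U, mode):
    """Tensor-times-matrix along `mode`: contracts axis `mode` of X
    (length I_mode) with the rows of U (shape I_mode x r), so the
    result has length r along `mode` (i.e., X x_mode U^T)."""
    Y = np.tensordot(X, U, axes=(mode, 0))  # contracted axis moves to the end
    return np.moveaxis(Y, -1, mode)

def hooi(X, ranks, iters=10, seed=0):
    """Higher-Order Orthogonal Iteration: returns (core, factors) such
    that X is approximated by the core multiplied along each mode by
    the corresponding factor matrix."""
    rng = np.random.default_rng(seed)
    N = X.ndim
    # Random orthonormal initialization (HOSVD is another common choice).
    factors = [np.linalg.qr(rng.standard_normal((X.shape[n], ranks[n])))[0]
               for n in range(N)]
    for _ in range(iters):
        for n in range(N):
            # TTM chain: multiply X by every factor except mode n's.
            Y = X
            for m in range(N):
                if m != n:
                    Y = ttm(Y, factors[m], m)
            # Unfold along mode n; keep the leading left singular vectors.
            Yn = np.moveaxis(Y, n, 0).reshape(X.shape[n], -1)
            U, _, _ = np.linalg.svd(Yn, full_matrices=False)
            factors[n] = U[:, :ranks[n]]
    # Core tensor: project X onto all factor subspaces.
    core = X
    for n in range(N):
        core = ttm(core, factors[n], n)
    return core, factors

# Example: decompose a random 30 x 40 x 50 tensor to ranks (5, 5, 5).
X = np.random.default_rng(1).standard_normal((30, 40, 50))
core, factors = hooi(X, (5, 5, 5))
print(core.shape, [U.shape for U in factors])  # (5, 5, 5) [(30, 5), (40, 5), (50, 5)]
```

The TTM chain inside the mode loop dominates the per-iteration cost, which is consistent with the abstract's framing: the computational load (FLOPs) induced by this step is the quantity the proposed distribution scheme is shown to be near-optimal on.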
Pages: 374-384 (11 pages)