Efficient Parallel Sparse Symmetric Tucker Decomposition for High-Order Tensors

Cited by: 0
Authors
Shivakumar, Shruti [1 ]
Li, Jiajia [2 ,3 ]
Kannan, Ramakrishnan [4 ]
Aluru, Srinivas [1 ]
Institutions
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Pacific Northwest Natl Lab, Richland, WA 99352 USA
[3] William & Mary, Williamsburg, VA USA
[4] Oak Ridge Natl Lab, Oak Ridge, TN USA
Keywords
DOI
Not available
CLC number
TP301 [Theory, Methods]
Subject classification code
081202
Abstract
Tensor-based methods are receiving renewed attention in recent years due to their prevalence in diverse real-world applications. There is considerable literature on tensor representations and algorithms for tensor decompositions, both for dense and sparse tensors. Many applications in hypergraph analytics, machine learning, psychometry, and signal processing result in tensors that are both sparse and symmetric, making them an important class for further study. Similar to the critical Tensor Times Matrix chain operation (TTMc) in general sparse tensors, the Sparse Symmetric Tensor Times Same Matrix chain (S³TTMc) operation is compute and memory intensive due to high tensor order and the associated factorial explosion in the number of non-zeros. In this work, we present a novel compressed storage format CSS for sparse symmetric tensors, along with an efficient parallel algorithm for the S³TTMc operation. We theoretically establish that S³TTMc on CSS achieves a better memory versus run-time trade-off compared to state-of-the-art implementations. We demonstrate experimental findings that confirm these results and achieve up to 2.9x speedup on synthetic and real datasets.
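To make the abstract's key idea concrete: a symmetric sparse tensor satisfies X[i,j,k] = X[perm(i,j,k)] for every permutation of the indices, so it suffices to store each nonzero once under a canonical (sorted) index tuple; a tensor-times-matrix product then expands each stored entry to its distinct permutations on the fly. The sketch below is an illustrative third-order, single-mode TTM in this spirit. It is not the paper's CSS format or its parallel S³TTMc algorithm; the function name, the dict-of-sorted-tuples storage, and the dense output are assumptions made for a minimal example.

```python
from itertools import permutations

import numpy as np


def sym_ttm_mode0(entries, U, dim):
    """Multiply a sparse symmetric order-3 tensor by U along mode 0.

    entries: dict mapping sorted index tuples (i <= j <= k) -> value;
             each symmetric equivalence class of nonzeros is stored once.
    U:       dense matrix of shape (R, dim).
    Returns a dense array Y of shape (R, dim, dim) with
             Y[r, j, k] = sum_i U[r, i] * X[i, j, k].
    """
    R = U.shape[0]
    Y = np.zeros((R, dim, dim))
    for idx, val in entries.items():
        # Expand the canonical entry to all distinct index permutations,
        # since X[i, j, k] = X[perm(i, j, k)] for a symmetric tensor.
        for (i, j, k) in set(permutations(idx)):
            Y[:, j, k] += U[:, i] * val
    return Y
```

The memory-versus-runtime trade-off the abstract refers to is visible even here: the canonical storage holds up to 3! = 6 times fewer entries than an explicit sparse tensor, at the cost of regenerating permutations during the product.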
Pages: 193-204
Page count: 12
Related papers
50 items total
  • [1] High Performance Parallel Algorithms for the Tucker Decomposition of Sparse Tensors
    Kaya, Oguz
    Ucar, Bora
    [J]. PROCEEDINGS 45TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING - ICPP 2016, 2016, : 103 - 112
  • [2] eOTD: An Efficient Online Tucker Decomposition for Higher Order Tensors
    Xiao, Houping
    Wang, Fei
    Ma, Fenglong
    Gao, Jing
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 1326 - 1331
  • [3] Fast and memory-efficient algorithms for high-order Tucker decomposition
    Jiyuan Zhang
    Jinoh Oh
    Kijung Shin
    Evangelos E. Papalexakis
    Christos Faloutsos
    Hwanjo Yu
    [J]. Knowledge and Information Systems, 2020, 62 : 2765 - 2794
  • [4] Fast and memory-efficient algorithms for high-order Tucker decomposition
    Zhang, Jiyuan
    Oh, Jinoh
    Shin, Kijung
    Papalexakis, Evangelos E.
    Faloutsos, Christos
    Yu, Hwanjo
    [J]. KNOWLEDGE AND INFORMATION SYSTEMS, 2020, 62 (07) : 2765 - 2794
  • [5] On Optimizing Distributed Tucker Decomposition for Sparse Tensors
    Chakaravarthy, Venkatesan T.
    Choi, Jee W.
    Joseph, Douglas J.
    Murali, Prakash
    Pandian, Shivmaran S.
    Sabharwal, Yogish
    Sreedhar, Dheeraj
    [J]. INTERNATIONAL CONFERENCE ON SUPERCOMPUTING (ICS 2018), 2018, : 374 - 384
  • [6] Accelerating the Tucker Decomposition with Compressed Sparse Tensors
    Smith, Shaden
    Karypis, George
    [J]. EURO-PAR 2017: PARALLEL PROCESSING, 2017, 10417 : 653 - 668
  • [7] Sparse Symmetric Format for Tucker Decomposition
    Shivakumar, Shruti
    Li, Jiajia
    Kannan, Ramakrishnan
    Aluru, Srinivas
    [J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (06) : 1743 - 1756
  • [8] Symmetric rank-1 approximation of symmetric high-order tensors
    Wu, Leqin
    Liu, Xin
    Wen, Zaiwen
    [J]. OPTIMIZATION METHODS & SOFTWARE, 2020, 35 (02): : 416 - 438
  • [9] S-HOT: Scalable High-Order Tucker Decomposition
    Oh, Jinoh
    Shin, Kijung
    Papalexakis, Evangelos E.
    Faloutsos, Christos
    Yu, Hwanjo
    [J]. WSDM'17: PROCEEDINGS OF THE TENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2017, : 761 - 770
  • [10] A sparse rank-1 approximation algorithm for high-order tensors
    Wang, Yiju
    Dong, Manman
    Xu, Yi
    [J]. APPLIED MATHEMATICS LETTERS, 2020, 102