Efficient and Scalable Computations with Sparse Tensors

Cited: 0
Authors: Baskaran, Muthu [1]; Meister, Benoit [1]; Vasilache, Nicolas [1]; Lethin, Richard [1]
Affiliation: [1] Reservoir Labs Inc, New York, NY 10012 USA
DOI: none available
CLC number: TP301 [Theory, Methods]
Discipline code: 081202
Abstract
For applications that deal with large amounts of high dimensional multi-aspect data, it becomes natural to represent such data as tensors or multi-way arrays. Multi-linear algebraic computations such as tensor decompositions are performed for summarization and analysis of such data. Their use in real-world applications can span across domains such as signal processing, data mining, computer vision, and graph analysis. The major challenges with applying tensor decompositions in real-world applications are (1) dealing with large-scale high dimensional data and (2) dealing with sparse data. In this paper, we address these challenges in applying tensor decompositions in real data analytic applications. We describe new sparse tensor storage formats that provide storage benefits and are flexible and efficient for performing tensor computations. Further, we propose an optimization that improves data reuse and reduces redundant or unnecessary computations in tensor decomposition algorithms. Furthermore, we couple our data reuse optimization and the benefits of our sparse tensor storage formats to provide a memory-efficient scalable solution for handling large-scale sparse tensor computations. We demonstrate improved performance and address memory scalability using our techniques on both synthetic small data sets and large-scale sparse real data sets.
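The abstract does not detail the proposed storage formats, but the general setting it describes can be illustrated with a generic coordinate (COO) layout for a sparse 3-way tensor, together with the MTTKRP (matricized tensor times Khatri-Rao product) kernel that dominates CP decomposition. This is a minimal sketch of the standard technique, not the paper's own formats or optimizations:

```python
import numpy as np

# Generic coordinate (COO) storage for a sparse 3-way tensor:
# only the nonzero entries and their index triples are kept,
# instead of a dense shape[0] x shape[1] x shape[2] array.
coords = np.array([[0, 1, 2],
                   [1, 0, 2],
                   [2, 3, 1]])        # one (i, j, k) triple per nonzero
vals = np.array([1.0, 2.0, 3.0])     # corresponding nonzero values
shape = (3, 4, 3)

# Mode-0 MTTKRP, the core kernel of CP decomposition:
# M[i, :] += val * (B[j, :] * C[k, :]) for each nonzero (i, j, k).
# With sparse storage the work is proportional to the number of
# nonzeros, not to the dense tensor size.
rank = 2
B = np.ones((shape[1], rank))        # mode-1 factor matrix
C = np.ones((shape[2], rank))        # mode-2 factor matrix
M = np.zeros((shape[0], rank))
for (i, j, k), v in zip(coords, vals):
    M[i] += v * B[j] * C[k]

# With all-ones factors, row i of M accumulates the nonzero values
# whose first index is i: rows [1, 1], [2, 2], [3, 3].
print(M)
```

A format like this stores O(nnz) data but, as the abstract notes, leaves room for improvement: compressed mode-hierarchical layouts can reduce index storage further and expose data reuse across the fibers visited by the loop above.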
Pages: 6