SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication

Cited by: 0
Authors
Kang, Jieui [1 ]
Choi, Soeun [1 ]
Lee, Eunjin [1 ]
Sim, Jaehyeong [2 ]
Affiliations
[1] Ewha Womans Univ, Artificial Intelligence Convergence, Seoul 03760, South Korea
[2] Ewha Womans Univ, Dept Comp Sci & Engn, Seoul 03760, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Random access memory; Sparse matrices; Computer architecture; Logic; Vectors; Turning; System-on-chip; Space exploration; Sorting; SRAM cells; Processing-in-memory; SpMV; sparsity; DRAM; ARCHITECTURE;
DOI
10.1109/ACCESS.2024.3505622
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple row activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that reducing operations by exploiting sparsity relies heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-aware Matrix Permutation (OMP) and Zero-aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication operations. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving a performance increase of up to 7.54x and a 22.4x improvement in energy efficiency across a wide range of SpMV tasks.
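The two ideas the abstract names can be illustrated in software, though the paper implements them in DRAM hardware. The sketch below is only an analogy under our own assumptions: `zmcs_sort` mimics the intuition behind Zero-aware Matrix Column Sorting (clustering zero-dense columns so whole-row operations over zero regions can be skipped), and `effective_bitwidth` mimics the intuition behind Multiplication Deferring (leading zero bits shorten a bit-serial multiply). The function names and details are illustrative, not from the paper.

```python
import numpy as np

def zmcs_sort(matrix):
    """Analogy for Zero-aware Matrix Column Sorting: reorder columns by
    descending zero count so zero-dense columns cluster together, letting
    row-wise operations over mostly-zero regions be skipped."""
    zero_counts = (matrix == 0).sum(axis=0)
    order = np.argsort(zero_counts)[::-1]  # columns with most zeros first
    return matrix[:, order], order

def effective_bitwidth(values, full_width=8):
    """Analogy for Multiplication Deferring: an unsigned fixed-point value
    with leading zero bits needs fewer bit-serial multiply steps; return
    the number of significant bits per value, capped at full_width."""
    v = np.abs(values.astype(np.int64))
    widths = np.where(v == 0, 0, np.floor(np.log2(np.maximum(v, 1))) + 1)
    return widths.astype(int).clip(max=full_width)

# Toy 3x4 sparse matrix: column 2 is all zeros, column 3 is dense.
A = np.array([[0, 3, 0, 1],
              [0, 0, 0, 2],
              [5, 0, 0, 4]])
sorted_A, order = zmcs_sort(A)
print(order)  # all-zero column first, dense column last
print(effective_bitwidth(np.array([0, 1, 3, 5])))
```

After sorting, the all-zero column sits at one end of the bank-row layout, so a row operation touching only that region can be elided; the bit-width profile shows how sparse bit patterns shrink the effective multiply length.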
Pages: 176009-176021
Page count: 13
Related Papers (50 total)
  • [31] Sparse matrix-vector multiplication on network-on-chip
    Sun, C-C
    Goetze, J.
    Jheng, H-Y
    Ruan, S-J
    ADVANCES IN RADIO SCIENCE, 2010, 8 : 289 - 294
  • [32] Communication balancing in parallel sparse matrix-vector multiplication
    Bisseling, RH
    Meesen, W
    ELECTRONIC TRANSACTIONS ON NUMERICAL ANALYSIS, 2005, 21 : 47 - 65
  • [33] Optimization by Runtime Specialization for Sparse Matrix-Vector Multiplication
    Kamin, Sam
    Garzaran, Maria Jesus
    Aktemur, Baris
    Xu, Danqing
    Yilmaz, Buse
    Chen, Zhongbo
    ACM SIGPLAN NOTICES, 2015, 50 (03) : 93 - 102
  • [34] A New Method of Sparse Matrix-Vector Multiplication on GPU
    Huan, Gao
    Qian, Zhang
    PROCEEDINGS OF 2012 2ND INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND NETWORK TECHNOLOGY (ICCSNT 2012), 2012, : 954 - 958
  • [35] A new approach for accelerating the sparse matrix-vector multiplication
    Tvrdik, Pavel
    Simecek, Ivan
    SYNASC 2006: EIGHTH INTERNATIONAL SYMPOSIUM ON SYMBOLIC AND NUMERIC ALGORITHMS FOR SCIENTIFIC COMPUTING, PROCEEDINGS, 2007, : 156 - +
  • [36] Parallel Sparse Matrix-Vector Multiplication Using Accelerators
    Maeda, Hiroshi
    Takahashi, Daisuke
    COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2016, PT II, 2016, 9787 : 3 - 18
  • [37] Adaptive diagonal sparse matrix-vector multiplication on GPU
    Gao, Jiaquan
    Xia, Yifei
    Yin, Renjie
    He, Guixia
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2021, 157 : 287 - 302
  • [38] No Zero Padded Sparse Matrix-Vector Multiplication on FPGAs
    Huang, Jiasen
    Ren, Junyan
    Yin, Wenbo
    Wang, Lingli
    PROCEEDINGS OF THE 2014 INTERNATIONAL CONFERENCE ON FIELD-PROGRAMMABLE TECHNOLOGY (FPT), 2014, : 290 - 291
  • [39] Sparse Binary Matrix-Vector Multiplication on Neuromorphic Computers
    Schuman, Catherine D.
    Kay, Bill
    Date, Prasanna
    Kannan, Ramakrishnan
    Sao, Piyush
    Potok, Thomas E.
    2021 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS (IPDPSW), 2021, : 308 - 311
  • [40] Sparse Matrix-Vector Multiplication on a Reconfigurable Supercomputer with Application
    Dubois, David
    Dubois, Andrew
    Boorman, Thomas
    Connor, Carolyn
    Poole, Steve
    ACM TRANSACTIONS ON RECONFIGURABLE TECHNOLOGY AND SYSTEMS, 2010, 3 (01)