SpDRAM: Efficient In-DRAM Acceleration of Sparse Matrix-Vector Multiplication

Citations: 0
Authors
Kang, Jieui [1]
Choi, Soeun [1]
Lee, Eunjin [1]
Sim, Jaehyeong [2]
Affiliations
[1] Ewha Womans Univ, Artificial Intelligence Convergence, Seoul 03760, South Korea
[2] Ewha Womans Univ, Dept Comp Sci & Engn, Seoul 03760, South Korea
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Random access memory; Sparse matrices; Computer architecture; Logic; Vectors; Turning; System-on-chip; Space exploration; Sorting; SRAM cells; Processing-in-memory; SpMV; sparsity; DRAM; ARCHITECTURE
DOI
10.1109/ACCESS.2024.3505622
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We introduce novel sparsity-aware in-DRAM matrix mapping techniques and a corresponding DRAM-based acceleration framework, termed SpDRAM, which utilizes a triple row activation scheme to efficiently handle sparse matrix-vector multiplication (SpMV). We found that reducing operations by exploiting sparsity relies heavily on how matrices are mapped into DRAM banks, which operate row by row. From this insight, we developed two distinct matrix mapping techniques aimed at maximizing the reduction of row operations with minimal design overhead: Output-aware Matrix Permutation (OMP) and Zero-aware Matrix Column Sorting (ZMCS). Additionally, we propose a Multiplication Deferring (MD) scheme that leverages the prevalent bit-level sparsity in matrix values to decrease the effective bit-width required for in-bank multiplication operations. Evaluation results demonstrate that the combination of our in-DRAM acceleration methods outperforms the latest DRAM-based PIM accelerator for SpMV, achieving a performance increase of up to 7.54x and a 22.4x improvement in energy efficiency across a wide range of SpMV tasks.
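To make the column-sorting idea concrete, the Python sketch below is a software stand-in for ZMCS, not the paper's implementation: it sorts a sparse matrix's columns by their zero counts so that zeros cluster together, then counts how many fixed-width spans of each row become all-zero and could therefore be skipped by a row-by-row in-DRAM engine. The function names, the 8-element chunk width, and the per-column density model are all illustrative assumptions.

import numpy as np

def zmcs_permutation(matrix):
    """Hypothetical stand-in for ZMCS: a column permutation that sorts
    columns by their zero counts (densest columns first) so that zeros
    cluster toward one end of each row."""
    zero_counts = np.count_nonzero(matrix == 0, axis=0)
    return np.argsort(zero_counts)

def skippable_chunks(matrix, chunk):
    """Count chunk-wide spans of matrix rows that are entirely zero; each
    span is a proxy for one DRAM row's worth of operands that a row-wise
    engine could skip outright."""
    skipped = 0
    for row in matrix:
        for c in range(0, matrix.shape[1], chunk):
            if not row[c:c + chunk].any():
                skipped += 1
    return skipped

rng = np.random.default_rng(0)
col_density = rng.random(64) * 0.3              # columns of widely varying density
A = (rng.random((256, 64)) < col_density) * rng.random((256, 64))

perm = zmcs_permutation(A)
A_sorted = A[:, perm]                           # permuting x the same way keeps y = Ax unchanged

print("skippable chunks before sorting:", skippable_chunks(A, 8))
print("skippable chunks after sorting: ", skippable_chunks(A_sorted, 8))

With the skewed column densities above, the sorted layout concentrates near-empty columns and typically reports markedly more skippable chunks; note that reordering columns of the matrix only requires applying the same permutation to the input vector, leaving the product unchanged. The paper's OMP and MD techniques are not modeled in this sketch.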
Pages: 176009-176021 (13 pages)