MSIM: A Highly Parallel Near-Memory Accelerator for MinHash Sketch

Cited by: 0
Authors
Sinha, Aman [1]
Mai, Jhih-Yong [1]
Lai, Bo-Cheng [1]
Affiliations
[1] Natl Yang Ming Chiao Tung Univ, Inst Elect, Hsinchu, Taiwan
Keywords
Processing-In-Memory; Near Memory Processing; MinHash Sketches; Long read genome assembly;
DOI
10.1109/SOCC56010.2022.9908115
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Genome assembly is an important Big Data analytics workload that involves massive computation for similarity searches over sequence databases. As a major component of the runtime, similarity searches require careful design for scalable performance. MinHash sketching is a data structure used extensively in long-read genome assembly pipelines; it involves generating, randomizing, and minimizing a set of hashes over all k-mers in the genome sequences. Compute-hungry MinHash sketch processing on commercially available multi-threaded CPUs suffers from the limited bandwidth of the L1 cache, which causes the CPUs to stall. Near-Data Processing (NDP) is an emerging trend in data-bound Big Data analytics that harnesses the low latency and high bandwidth available within Dual In-line Memory Modules (DIMMs). While NDP architectures have generally been used for memory-bound computations, MinHash sketching is a potential application that can gain massive throughput by exploiting memory banks as a higher-bandwidth substitute for the L1 cache. In this work, we propose MSIM, a distributed, highly parallel, and efficient hardware-software co-design that accelerates MinHash sketch processing on lightweight components placed within the DRAM hierarchy. Multiple ASIC-based Processing Engines (PEs) placed at the bank-group level in MSIM provide high parallelism for low-latency computation. The PEs sequentially access data from all banks within their bank group with the help of a dedicated address calculator that uses an optimized data-mapping scheme. The PEs are controlled by a custom arbiter, which is activated directly by the host CPU using standard DDR commands, without requiring any modification to the memory controller or the standard DIMM buses. MSIM incurs limited area and power overheads while delivering up to 384.9x speedup and 1088.4x energy reduction over the baseline multi-threaded software solution in our experiments. MSIM achieves a 4.26x speedup over a high-end GPU while consuming 26.4x less energy. Moreover, the MSIM design is highly scalable and extensible.
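The sketching step described in the abstract (generating, randomizing, and minimizing hashes over all k-mers) can be made concrete with a short software sketch. The Python example below is illustrative only: the k-mer length, sketch size, and seeded blake2b hash family are assumptions chosen for clarity and do not reflect the hash functions or configuration used by MSIM or any particular assembly pipeline.

```python
# A minimal, illustrative MinHash sketch over the k-mers of a DNA sequence.
# Parameters (k, sketch_size) and the seeded blake2b hash are assumed for
# clarity; they are not MSIM's hash family or configuration.
import hashlib


def kmers(seq: str, k: int):
    """Yield every length-k substring (k-mer) of the sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]


def seeded_hash(kmer: str, seed: int) -> int:
    """Hash a k-mer under one member of a randomized hash family (seeded blake2b)."""
    data = seed.to_bytes(4, "little") + kmer.encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "little")


def minhash_sketch(seq: str, k: int = 16, sketch_size: int = 128) -> list:
    """Generate, randomize, and minimize: keep the minimum hash value per
    hash function over all k-mers of the sequence."""
    return [min(seeded_hash(km, seed) for km in kmers(seq, k))
            for seed in range(sketch_size)]


if __name__ == "__main__":
    a = minhash_sketch("ACGTACGTTGCAACGTACGTACGTTGCA")
    b = minhash_sketch("ACGTACGTTGCAACGTACGTACGTACGA")
    # The fraction of matching sketch slots estimates the Jaccard similarity
    # of the two sequences' k-mer sets, which drives similarity search in assembly.
    matches = sum(x == y for x, y in zip(a, b))
    print(f"estimated Jaccard similarity: {matches / len(a):.2f}")
```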
Pages: 184-189
Number of pages: 6
Related papers (50 records in total)
  • [1] Cho, Jihwan; Maulana, Dalta Imam; Jung, Wanyeong. A Near-Memory Radix Sort Accelerator with Parallel 1-bit Sorter. 2022 IEEE 30th International Symposium on Field-Programmable Custom Computing Machines (FCCM 2022), 2022: 238.
  • [2] Barredo, Adrian; Armejach, Adria; Beard, Jonathan C.; Moreto, Miquel. PLANAR: A Programmable Accelerator for Near-Memory Data Rearrangement. Proceedings of the 2021 ACM International Conference on Supercomputing (ICS 2021), 2021: 164-176.
  • [3] Zhang, Chi; Scheffler, Paul; Benz, Thomas; Perotti, Matteo; Benini, Luca. Near-Memory Parallel Indexing and Coalescing: Enabling Highly Efficient Indirect Access for SpMV. 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2024.
  • [4] Das, Palash; Kapoor, Hemangee K. CLU: A Near-Memory Accelerator Exploiting the Parallelism in Convolutional Neural Networks. ACM Journal on Emerging Technologies in Computing Systems, 2021, 17(2).
  • [5] Singh, Gagandeep; Diamantopoulos, Dionysios; Hagleitner, Christoph; Stuijk, Sander; Corporaal, Henk. NARMADA: Near-Memory Horizontal Diffusion Accelerator for Scalable Stencil Computations. 2019 29th International Conference on Field-Programmable Logic and Applications (FPL), 2019: 263-269.
  • [6] Brown, Grant; Tenace, Valerio; Gaillardon, Pierre-Emmanuel. NEMO-CNN: An Efficient Near-Memory Accelerator for Convolutional Neural Networks. 2021 IEEE 32nd International Conference on Application-Specific Systems, Architectures and Processors (ASAP 2021), 2021: 57-60.
  • [7] Falsafi, Babak. Near-Memory Data Services. IEEE Micro, 2016, 36(1): 6-7.
  • [8] Picorel, Javier; Jevdjic, Djordje; Falsafi, Babak. Near-Memory Address Translation. 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2017: 303-317.
  • [9] Salahuddin, Saveef; Tan, Ava; Cheema, Suraj; Shanker, Nirmaan; Hoffmann, Michael; Bae, J-H. FeFETs for Near-Memory and In-Memory Compute. 2021 IEEE International Electron Devices Meeting (IEDM), 2021.
  • [10] Fiorin, Leandro; Jongerius, Rik; Vermij, Erik; van Lunteren, Jan; Hagleitner, Christoph. Near-Memory Acceleration for Radio Astronomy. IEEE Transactions on Parallel and Distributed Systems, 2018, 29(1): 115-128.