GraphA: An efficient ReRAM-based architecture to accelerate large scale graph processing

Cited by: 22
Authors
Ghasemi, Seyed Ali [1 ]
Jahannia, Belal [1 ]
Farbeh, Hamed [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Comp Engn, Tehran, Iran
Keywords
Graph processing; Non-volatile memory (NVM); Resistive random-access memory (ReRAM); Hardware acceleration; Processing-in-memory (PIM); PERFORMANCE; MEMORY; MODEL
DOI
10.1016/j.sysarc.2022.102755
CLC classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Graph analytics underlies many modern applications, e.g., machine learning and streaming-data problems. With the unprecedented growth of data in emerging domains such as social networks, which generate large volumes of images and documents, real-time big-data processing is crucial. Graph processing on traditional computer architectures suffers from irregular memory accesses that cause significant data movement and waste a large amount of energy and time. ReRAM-based processing-in-memory (PIM) is a novel technology that addresses the memory-wall problem; in addition, it provides a high level of parallelism and a significant reduction in energy consumption with negligible leakage power. In this paper, we propose a ReRAM-based PIM architecture, named GraphA, which includes a novel reordering algorithm and a scheme for mapping data to ReRAM Graph Engines (RGEs) that achieves high RGE utilization. Furthermore, we present a compressed format, a memory layout, and a suitable graph partitioning for graph traversal to eliminate extra communication and useless computation. Moreover, we investigate the computation patterns of graph processing to find a suitable preprocessing model for the proposed GraphA architecture, based on reorganizing classified supernode graphs and offering a runtime execution that fits it. Evaluations of GraphA on various real-world graphs show an average performance enhancement and energy saving of 5.3x and 6.0x, respectively, compared with the state-of-the-art GraphR architecture.
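The abstract does not spell out GraphA's compressed format or reordering algorithm, but the utilization argument rests on a tiling idea common to ReRAM graph accelerators (including GraphR): the sparse adjacency matrix is cut into crossbar-sized sub-blocks, and only non-empty blocks are loaded into graph engines, so reordering vertices to densify blocks directly raises engine utilization. A minimal illustrative sketch of that tiling step, with hypothetical names and a toy tile size (not the paper's actual data layout):

```python
# Illustrative sketch, NOT GraphA's actual format: partition a sparse edge
# list into (tile x tile) sub-blocks, as a ReRAM accelerator might map
# edges onto crossbar-sized graph-engine blocks.

def partition_edges(edges, tile=4):
    """Group edges by the (row-block, column-block) tile they fall into.

    Empty tiles never appear in the result, so skipping them is what the
    reordering step tries to maximize: fewer, denser tiles mean fewer
    crossbar loads and higher engine utilization.
    """
    blocks = {}
    for src, dst in edges:
        key = (src // tile, dst // tile)                    # which tile
        blocks.setdefault(key, []).append((src % tile, dst % tile))
    return blocks

# Toy 8-vertex graph: edges cluster in two communities (0-2 and 4-7),
# so only 2 of the 4 possible 4x4 tiles are non-empty.
edges = [(0, 1), (0, 2), (1, 2), (5, 6), (6, 5), (7, 4)]
blocks = partition_edges(edges, tile=4)
print(sorted(blocks))  # → [(0, 0), (1, 1)]
```

A reordering pass would permute vertex IDs before this step so that connected vertices share tiles, shrinking the set of non-empty blocks that must be streamed through the crossbars.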
Pages: 13
Related papers
50 records in total
  • [41] Magma: A Monolithic 3D Vertical Heterogeneous ReRAM-based Main Memory Architecture
    Zokaee, Farzaneh
    Zhang, Mingzhe
    Ye, Xiaochun
    Fan, Dongrui
    Jiang, Lei
    PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,
  • [42] Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient ReRAM-Based Deployment
    Zhang, Jingyang
    Yang, Huanrui
    Chen, Fan
    Wang, Yitu
    Li, Hai
    FIFTH WORKSHOP ON ENERGY EFFICIENT MACHINE LEARNING AND COGNITIVE COMPUTING - NEURIPS EDITION (EMC2-NIPS 2019), 2019, : 1 - 5
  • [43] An Energy-Efficient Inference Engine for a Configurable ReRAM-Based Neural Network Accelerator
    Zheng, Yang-Lin
    Yang, Wei-Yi
    Chen, Ya-Shu
    Han, Ding-Hung
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (03) : 740 - 753
  • [44] Efficient processing techniques for very large-scale graph structure
    1600, Institute of Electronics, Information and Communication Engineers (97):
  • [45] Efficient Distributed Query Processing on Large Scale RDF Graph Data
    Wang X.
    Xu Q.
    Chai L.-L.
    Yang Y.-J.
    Chai Y.-P.
    Ruan Jian Xue Bao/Journal of Software, 2019, 30 (03): : 498 - 514
  • [46] Runtime Row/Column Activation Pruning for ReRAM-based Processing-in-Memory DNN Accelerators
    Jiang, Xikun
    Shen, Zhaoyan
    Sun, Siqing
    Yin, Ping
    Jia, Zhiping
    Ju, Lei
    Zhang, Zhiyong
    Yu, Dongxiao
    2023 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD, 2023,
  • [47] This is SPATEM! A Spatial-Temporal Optimization Framework for Efficient Inference on ReRAM-based CNN Accelerator
    Tsou, Yen-Ting
    Chen, Kuan-Hsun
    Yang, Chia-Lin
    Cheng, Hsiang-Yun
    Chen, Jian-Jia
    Tsai, Der-Yu
    27TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2022, 2022, : 702 - 707
  • [48] A Novel ReRAM-Based Architecture of Field Sequential Color Driver for High-Resolution LCoS Displays
    Han, Youngsun
    Kim, Dongmin
    Kim, Yongtae
    IEEE ACCESS, 2020, 8 : 223385 - 223395
  • [49] Energy-Efficient ReRAM-based ML Training via Mixed Pruning and Reconfigurable ADC
    Ogbogu, Chukwufumnanya
    Mohapatra, Soumen
    Joardar, Biresh Kumar
    Doppa, Janardhan Rao
    Heo, Deuk
    Chakrabarty, Krishnendu
    Pande, Partha Pratim
    2023 IEEE/ACM INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED, 2023,
  • [50] Efficient Large Graph Processing with Chunk-Based Graph Representation Model
    Wang, Rui
    Zong, Weixu
    He, Shuibing
    Chen, Xinyu
    Li, Zhenxin
    Dang, Zheng
    PROCEEDINGS OF THE 2024 USENIX ANNUAL TECHNICAL CONFERENCE, ATC 2024, 2024, : 1239 - 1255