GraphA: An efficient ReRAM-based architecture to accelerate large scale graph processing

Cited by: 22
Authors
Ghasemi, Seyed Ali [1 ]
Jahannia, Belal [1 ]
Farbeh, Hamed [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Comp Engn, Tehran, Iran
Keywords
Graph processing; Non-volatile memory (NVM); Resistive random-access memory (ReRAM); Hardware acceleration; Processing-in-memory (PIM); Performance; Memory; Model
DOI
10.1016/j.sysarc.2022.102755
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline classification code
0812
Abstract
Graph analytics underpins many modern applications, e.g., machine learning and streaming-data problems. With the unprecedented growth of data in emerging domains such as social networks, which generate vast numbers of images and documents, real-time big-data processing is crucial. Graph processing on traditional computer architectures suffers from irregular memory accesses that cause significant data movement and waste a large amount of energy and time. ReRAM-based processing-in-memory (PIM) is an emerging technology that addresses the memory-wall problem; in addition, it offers a high degree of parallelism and a significant reduction in energy consumption with negligible leakage power. In this paper, we propose a ReRAM-based PIM architecture, named GraphA, which combines a novel reordering algorithm with a scheme for mapping data onto ReRAM Graph Engines (RGEs) so that the RGEs operate at high utilization. Furthermore, we present a compressed format, a memory layout, and a suitable graph partitioning for graph traversal that eliminate extra communication and useless computation. Moreover, we investigate the computation patterns of graph processing to derive a suitable preprocessing model for GraphA, based on reorganizing classified supernode graphs, and we offer a runtime execution scheme that fits it. Evaluations of GraphA on various real-world graphs show an average performance improvement of 5.3x and energy saving of 6.0x, respectively, compared with the state-of-the-art GraphR architecture.
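To make the preprocessing idea described in the abstract more concrete, the following is a minimal, hypothetical Python sketch: it reorders vertices by degree and tiles the sparse adjacency matrix into fixed-size crossbar blocks (stand-ins for the ReRAM Graph Engines), then reports how many tiles are non-empty and how densely they are filled. The degree-based heuristic, the 8x8 tile size, and all function names are illustrative assumptions only; they are not the paper's actual reordering, mapping, or compression algorithms.

```python
# Hypothetical illustration of crossbar-oriented graph preprocessing:
# reorder vertices, tile the adjacency matrix into fixed-size blocks
# (stand-ins for ReRAM Graph Engines), and estimate tile utilization.
# This is NOT GraphA's algorithm; it is a simple heuristic for illustration.

from collections import defaultdict

TILE = 8  # assumed crossbar (RGE) dimension; the real size is a design parameter


def reorder_by_degree(num_vertices, edges):
    """Return new_id[old_id]: a permutation placing high-degree vertices first."""
    degree = [0] * num_vertices
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    order = sorted(range(num_vertices), key=lambda v: -degree[v])
    new_id = [0] * num_vertices
    for pos, old in enumerate(order):
        new_id[old] = pos
    return new_id


def tile_edges(edges, new_id, tile=TILE):
    """Group edges into (row_block, col_block) tiles of the reordered adjacency matrix."""
    tiles = defaultdict(list)
    for u, v in edges:
        r, c = new_id[u], new_id[v]
        tiles[(r // tile, c // tile)].append((r % tile, c % tile))
    return tiles


if __name__ == "__main__":
    # Toy graph: a hub vertex plus two isolated edges
    edges = [(0, i) for i in range(1, 9)] + [(9, 10), (11, 12)]
    n = 13
    new_id = reorder_by_degree(n, edges)
    tiles = tile_edges(edges, new_id)
    total_blocks = ((n + TILE - 1) // TILE) ** 2
    print(f"non-empty tiles: {len(tiles)} / {total_blocks}")
    print(f"edges per non-empty tile: "
          f"{len(edges) / len(tiles):.1f} (higher = better RGE utilization)")
```

In this toy setting, pulling the high-degree hub to the front concentrates its edges in fewer tiles, so fewer crossbar activations carry more useful work; GraphA's reordering and mapping pursue the same goal with the paper's own, more sophisticated algorithms.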
Pages: 13