Accelerating Graph Neural Network Training on ReRAM-Based PIM Architectures via Graph and Model Pruning

Cited by: 5
Authors
Ogbogu, Chukwufumnanya O. [1 ]
Arka, Aqeeb Iqbal [1 ]
Pfromm, Lukas [2 ]
Joardar, Biresh Kumar [3 ]
Doppa, Janardhan Rao [1 ]
Chakrabarty, Krishnendu [4 ]
Pande, Partha Pratim [1 ]
Affiliations
[1] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA 99164 USA
[2] Oregon State Univ, Dept Elect & Comp Engn, Corvallis, OR 97331 USA
[3] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77004 USA
[4] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
Funding
US National Science Foundation;
Keywords
Data compression; graph neural network (GNN); PIM; pruning; resistive random-access memory (ReRAM);
DOI
10.1109/TCAD.2022.3227879
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline code
0812 ;
Abstract
Graph neural networks (GNNs) are used for predictive analytics on graph-structured data and have become popular in diverse real-world applications. Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architectures can accelerate GNN training. However, GNN training on ReRAM-based architectures is both compute- and data-intensive in nature. In this work, we propose a framework called SlimGNN that synergistically combines graph and model pruning to accelerate GNN training on ReRAM-based architectures. The proposed framework reduces the amount of redundant information in both the GNN model and the input graph(s) to streamline the overall training process. This enables fast and energy-efficient GNN training on ReRAM-based architectures. Experimental results demonstrate that using this framework, we can accelerate GNN training by up to 4.5x while using 6.6x less energy compared to the unpruned counterparts.
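The abstract's core idea, jointly pruning the GNN model and the input graph, can be illustrated with a minimal sketch. The functions below are a hypothetical illustration of generic magnitude-based model pruning and threshold-based edge pruning, under the assumption of dense NumPy arrays; they are not the paper's actual SlimGNN algorithm.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Model pruning: zero out the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def prune_graph_edges(adj, keep_ratio):
    """Graph pruning: drop the lowest-weight edges of a weighted adjacency matrix."""
    edges = adj[np.nonzero(adj)]
    if edges.size == 0:
        return adj.copy()
    # keep only edges at or above the (1 - keep_ratio) quantile of edge weights
    threshold = np.quantile(edges, 1.0 - keep_ratio)
    return np.where(adj >= threshold, adj, 0.0)
```

On a crossbar-based PIM substrate, both forms of sparsity shrink the matrix-vector multiplications that dominate GNN training, which is the source of the reported speedup and energy savings.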
Pages: 2703-2716 (14 pages)
Related Papers
50 records in total
  • [1] Performance and Accuracy Tradeoffs for Training Graph Neural Networks on ReRAM-Based Architectures
    Arka, Aqeeb Iqbal
    Joardar, Biresh Kumar
    Doppa, Janardhan Rao
    Pande, Partha Pratim
    Chakrabarty, Krishnendu
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2021, 29 (10) : 1743 - 1756
  • [2] PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration
    Yang, Tao
    Li, Dongyue
    Han, Yibo
    Zhao, Yilong
    Liu, Fangxin
    Liang, Xiaoyao
    He, Zhezhi
    Jiang, Li
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 583 - 588
  • [3] Data Pruning-enabled High Performance and Reliable Graph Neural Network Training on ReRAM-based Processing-in-Memory Accelerators
    Ogbogu, Chukwufumnanya
    Joardar, Biresh
    Chakrabarty, Krishnendu
    Doppa, Jana
    Pande, Partha Pratim
    ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2024, 29 (05)
  • [4] ReHy: A ReRAM-Based Digital/Analog Hybrid PIM Architecture for Accelerating CNN Training
    Jin, Hai
    Liu, Cong
    Liu, Haikun
    Luo, Ruikun
    Xu, Jiahong
    Mao, Fubing
    Liao, Xiaofei
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (11) : 2872 - 2884
  • [5] GraphIte: Accelerating Iterative Graph Algorithms on ReRAM Architectures via Approximate Computing
    Choudhury, Dwaipayan
    Kalyanaraman, Ananth
    Pande, Partha
    2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023,
  • [6] ReGNN: A ReRAM-based Heterogeneous Architecture for General Graph Neural Networks
    Liu, Cong
    Liu, Haikun
    Jin, Hai
    Liao, Xiaofei
    Zhang, Yu
    Duan, Zhuohui
    Xu, Jiahong
    Li, Huize
    PROCEEDINGS OF THE 59TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2022, 2022, : 469 - 474
  • [7] Accelerating parallel reduction and scan primitives on ReRAM-based architectures
    Jin Z.
    Duan Y.
    Yi E.
    Ji H.
    Liu W.
    Guofang Keji Daxue Xuebao/Journal of National University of Defense Technology, 2022, 44 (05): : 80 - 91
  • [8] GRAM: Graph Processing in a ReRAM-based Computational Memory
    Zhou, Minxuan
    Imani, Mohsen
    Gupta, Saransh
    Kim, Yeseong
    Rosing, Tajana
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 591 - 596
  • [9] Training Sparse Graph Neural Networks via Pruning and Sprouting
    Ma, Xueqi
    Ma, Xingjun
    Erfani, Sarah
    Bailey, James
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 136 - 144
  • [10] A Quantized Training Framework for Robust and Accurate ReRAM-based Neural Network Accelerators
    Zhang, Chenguang
    Zhou, Pingqiang
    2021 26TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC), 2021, : 43 - 48