Accelerating Graph Neural Network Training on ReRAM-Based PIM Architectures via Graph and Model Pruning

Cited by: 5
Authors
Ogbogu, Chukwufumnanya O. [1 ]
Arka, Aqeeb Iqbal [1 ]
Pfromm, Lukas [2 ]
Joardar, Biresh Kumar [3 ]
Doppa, Janardhan Rao [1 ]
Chakrabarty, Krishnendu [4 ]
Pande, Partha Pratim [1 ]
Affiliations
[1] Washington State Univ, Sch Elect Engn & Comp Sci, Pullman, WA 99164 USA
[2] Oregon State Univ, Dept Elect & Comp Engn, Corvallis, OR 97331 USA
[3] Univ Houston, Dept Elect & Comp Engn, Houston, TX 77004 USA
[4] Duke Univ, Dept Elect & Comp Engn, Durham, NC 27708 USA
Funding
U.S. National Science Foundation;
Keywords
Data compression; graph neural network (GNN); PIM; pruning; resistive random-access memory (ReRAM);
DOI
10.1109/TCAD.2022.3227879
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
Graph neural networks (GNNs) are used for predictive analytics on graph-structured data and have become very popular in diverse real-world applications. Resistive random-access memory (ReRAM)-based processing-in-memory (PIM) architectures can accelerate GNN training. However, GNN training on ReRAM-based architectures is both compute- and data-intensive in nature. In this work, we propose a framework called SlimGNN that synergistically combines graph and model pruning to accelerate GNN training on ReRAM-based architectures. The proposed framework reduces the amount of redundant information in both the GNN model and the input graph(s) to streamline the overall training process. This enables fast and energy-efficient GNN training on ReRAM-based architectures. Experimental results demonstrate that, using this framework, we can accelerate GNN training by up to 4.5x while using 6.6x less energy compared to the unpruned counterparts.
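As a purely illustrative sketch (not the authors' SlimGNN code; the names TinyGCN, prune_graph, and prune_weights, and all parameter values, are hypothetical), the following plain-PyTorch snippet shows how the two ideas from the abstract might be combined: graph pruning that drops low-weight edges from the adjacency matrix, and magnitude-based model pruning that zeroes out small weights of a minimal one-layer GCN. The framework's actual importance criteria and its mapping onto ReRAM crossbars are not reproduced here.

# Hypothetical illustration only; not the SlimGNN implementation.
import torch
import torch.nn as nn

def prune_graph(adj: torch.Tensor, keep_ratio: float = 0.8) -> torch.Tensor:
    """Graph pruning: keep only the strongest edges, zero out the rest."""
    edges = adj[adj > 0]
    if edges.numel() == 0:
        return adj
    k = max(1, int(keep_ratio * edges.numel()))
    threshold = torch.topk(edges, k).values.min()
    return torch.where(adj >= threshold, adj, torch.zeros_like(adj))

def prune_weights(layer: nn.Linear, sparsity: float = 0.5) -> None:
    """Model pruning: zero out the smallest-magnitude weights in place."""
    with torch.no_grad():
        w = layer.weight
        k = int(sparsity * w.numel())
        if k == 0:
            return
        threshold = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > threshold).float())

class TinyGCN(nn.Module):
    """Minimal one-layer GCN: H' = relu(A_hat @ X @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization of the (pruned) adjacency with self-loops.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        a_hat = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(a_hat @ self.lin(x))

if __name__ == "__main__":
    torch.manual_seed(0)
    n, feat, hidden = 6, 4, 3
    adj = torch.rand(n, n).triu(1)      # random weighted graph (upper triangle)
    adj = adj + adj.t()                 # make it symmetric, zero diagonal
    x = torch.rand(n, feat)

    adj_pruned = prune_graph(adj, keep_ratio=0.5)   # graph pruning
    model = TinyGCN(feat, hidden)
    prune_weights(model.lin, sparsity=0.5)          # model pruning
    out = model(adj_pruned, x)
    print("output shape:", out.shape)

In a ReRAM PIM setting, sparsifying both the graph and the weights in this manner would plausibly reduce the number of crossbar writes and vector-matrix operations per training step, which matches the abstract's rationale for the reported speedup and energy savings; the concrete mechanisms are described in the paper itself.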
Pages: 2703-2716
Number of pages: 14
Related Papers
50 records in total
  • [41] A Tennis Training Action Analysis Model Based on Graph Convolutional Neural Network
    Zhang, Xinyu
    Chen, Jihua
    IEEE ACCESS, 2023, 11: 113264-113271
  • [42] Accelerating aerodynamic design optimization based on graph convolutional neural network
    Li, Tiejun
    Yan, Junjun
    Chen, Xinhai
    Wang, Zhichao
    Zhang, Qingyang
    Zhou, Enqiang
    Gong, Chunye
    Liu, Jie
    INTERNATIONAL JOURNAL OF MODERN PHYSICS C, 2024, 35 (01):
  • [43] A microstructure-based graph neural network for accelerating multiscale simulations
    Storm, J.
    Rocha, I. B. C. M.
    van der Meer, F. P.
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2024, 427
  • [44] Trained Biased Number Representation for ReRAM-Based Neural Network Accelerators
    Wang, Weijia
    Lin, Bill
    ACM JOURNAL ON EMERGING TECHNOLOGIES IN COMPUTING SYSTEMS, 2019, 15 (02)
  • [45] The Graph Neural Network Model
    Scarselli, Franco
    Gori, Marco
    Tsoi, Ah Chung
    Hagenbuchner, Markus
    Monfardini, Gabriele
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2009, 20 (01): 61-80
  • [46] Learning the Sparsity for ReRAM: Mapping and Pruning Sparse Neural Network for ReRAM based Accelerator
    Lin, Jilan
    Zhu, Zhenhua
    Wang, Yu
    Xie, Yuan
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019: 639-644
  • [47] Accelerating network layouts using graph neural networks
    Both, Csaba
    Dehmamy, Nima
    Yu, Rose
    Barabasi, Albert-Laszlo
    NATURE COMMUNICATIONS, 2023, 14 (01)
  • [48] Accelerating Virtual Network Embedding with Graph Neural Networks
    Habibi, Farzad
    Dolati, Mahdi
    Khonsari, Ahmad
    Ghaderi, Majid
    2020 16TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT (CNSM), 2020,
  • [49] Accelerating network layouts using graph neural networks
    Csaba Both
    Nima Dehmamy
    Rose Yu
    Albert-László Barabási
    Nature Communications, 14
  • [50] Processing-in-memory (PIM)-based Manycore Architecture for Training Graph Neural Networks
    Pande, Partha P.
    2023 INTERNATIONAL VLSI SYMPOSIUM ON TECHNOLOGY, SYSTEMS AND APPLICATIONS, VLSI-TSA/VLSI-DAT, 2023,