SparGNN: Efficient Joint Feature-Model Sparsity Exploitation in Graph Neural Network Acceleration

Cited: 0
Authors
Yin, Chen [1]
Jiang, Jianfei [1]
Wang, Qin [1]
Mao, Zhigang [1]
Jing, Naifeng [1]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Micro Nano Elect, Shanghai, Peoples R China
Funding
National Key R&D Program of China;
Keywords
DOI
10.1109/ASP-DAC58780.2024.10473883
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the rapid growth in both graph scale and model size, accelerating graph neural networks (GNNs) at scale puts significant pressure on computation and memory footprint. Exploiting data sparsity through pruning, which has proved remarkably effective in deep neural networks (DNNs), still lags behind in GNN acceleration, because the costly pruning overhead on large graphs and inefficient hardware support eclipse the benefit of GNN sparsification. To this end, this paper proposes SparGNN, an algorithm-accelerator co-design that efficiently exploits data sparsity in both features and models to accelerate GNN computation while preserving accuracy. On the algorithm side, to reduce the overhead of iterative pruning, we distill a sparsified subgraph that substitutes for the original input graph during pruning, which uncovers the potential data sparsity in both features and models at low cost and without compromising accuracy. On the hardware side, to improve the data locality of the sparsified feature-weight multiplication, we design a compressed row-/column-wise product dataflow for efficient feature updating, and we propose lightweight hardware changes that make the design applicable to conventional GNN accelerators. Experimental results show that, compared to state-of-the-art GNN accelerators, SparGNN reduces computation by 1.5-4.3x and achieves an average speedup of 1.8-6.8x with 1.4-9.2x higher energy efficiency.
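For intuition only, below is a minimal software sketch of the general idea behind a compressed row-wise product: a sparse feature matrix stored in CSR form is multiplied by a dense (possibly pruned) weight matrix while visiting only the nonzero feature entries. The CSR layout and the helper name csr_row_times_dense are illustrative assumptions for this sketch; they are not the paper's hardware dataflow or accelerator implementation.

```python
import numpy as np

def csr_row_times_dense(indptr, indices, data, W):
    """Row-wise sparse-dense product: out[v] = sum over nonzeros x[v,k] * W[k].

    Only nonzero feature entries are visited, so work shrinks roughly in
    proportion to feature sparsity, which is the kind of saving a
    sparsity-aware dataflow aims to exploit.
    """
    num_rows = len(indptr) - 1
    out = np.zeros((num_rows, W.shape[1]))
    for v in range(num_rows):                      # one vertex (feature row) at a time
        for p in range(indptr[v], indptr[v + 1]):  # nonzeros of this feature row
            out[v] += data[p] * W[indices[p]]      # accumulate scaled weight rows
    return out

# Toy example: 3 vertices, 4-dim sparse features, 2-dim output.
indptr  = np.array([0, 2, 3, 5])
indices = np.array([0, 3, 1, 0, 2])
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
W       = np.arange(8, dtype=float).reshape(4, 2)  # stand-in for a pruned weight matrix
print(csr_row_times_dense(indptr, indices, data, W))
```

Roughly speaking, this loop structure streams one compressed feature row at a time against the weight matrix, which illustrates the locality that a row-wise product dataflow targets; the column-wise variant described in the paper reorders the same accumulation around weight columns instead.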
Pages: 225-230
Page count: 6
Related Papers (50 total)
  • [1] LassoNet: A Neural Network with Feature Sparsity
    Lemhadri, Ismael
    Ruan, Feng
    Abraham, Louis
    Tibshirani, Robert
    JOURNAL OF MACHINE LEARNING RESEARCH, 2021, 22
  • [2] DTS: dynamic training slimming with feature sparsity for efficient convolutional neural network
    Yin, Jia
    Wang, Wei
    Guo, Zhonghua
    Ji, Yangchun
    JOURNAL OF REAL-TIME IMAGE PROCESSING, 2024, 21 (04)
  • [3] Reusing d-DNNFs for Efficient Feature-Model Counting
    Sundermann, Chico
    Raab, Heiko
    Hess, Tobias
    Thuem, Thomas
    Schaefer, Ina
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (08)
  • [4] Deep Neural Network Acceleration Method Based on Sparsity
    He, Ming
    Zhao, Haiwu
    Wang, Guozhong
    Chen, Yu
    Zhu, Linlin
    Gao, Yuan
    DIGITAL TV AND MULTIMEDIA COMMUNICATION, 2019, 1009 : 133 - 145
  • [5] Efficient feature envy detection and refactoring based on graph neural network
    Yu, Dongjin
    Xu, Yihang
    Weng, Lehui
    Chen, Jie
    Chen, Xin
    Yang, Quanxin
    AUTOMATED SOFTWARE ENGINEERING, 2025, 32 (01)
  • [6] Joint Adaptive Graph and Structured Sparsity Regularization for Unsupervised Feature Selection
    Sun, Zhenzhen
    Yu, Yuanlong
    arXiv, 2020,
  • [7] Neural network model switching for efficient feature extraction
    Kameyama, K
    Kosugi, Y
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 1999, E82D (10): 1372 - 1383
  • [8] Survey on Graph Neural Network Acceleration Architectures
    Li H.
    Yan M.
    Lü Z.
    Li W.
    Ye X.
    Fan D.
    Tang Z.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (06): 1204 - 1229
  • [9] Structured feature sparsity training for convolutional neural network compression
    Wang, Wei
    Zhu, Liqiang
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2020, 71