Sparse Deep Neural Network Graph Challenge

Cited by: 32
Authors
Kepner, Jeremy [1 ,2 ,3 ]
Alford, Simon [2 ]
Gadepally, Vijay [1 ,2 ]
Jones, Michael [1 ]
Milechin, Lauren [4 ]
Robinett, Ryan [3 ]
Samsi, Sid [1 ]
Affiliations
[1] MIT Lincoln Lab, Supercomp Ctr, Lexington, MA 02421 USA
[2] MIT Comp Sci & AI Lab, Cambridge, MA 02139 USA
[3] MIT Math Dept, Cambridge, MA 02142 USA
[4] MIT Dept Earth Atmospher & Planetary Sci, Cambridge, MA USA
DOI
10.1109/hpec.2019.8916336
Chinese Library Classification
TP3 (Computing Technology, Computer Technology)
Subject Classification
0812
Abstract
The MIT/IEEE/Amazon GraphChallenge.org encourages community approaches to developing new solutions for analyzing graphs and sparse data. Sparse AI analytics present unique scalability difficulties. The proposed Sparse Deep Neural Network (DNN) Challenge draws upon prior challenges from machine learning, high performance computing, and visual analytics to create a challenge that is reflective of emerging sparse AI systems. The Sparse DNN Challenge is based on a mathematically well-defined DNN inference computation and can be implemented in any programming environment. Sparse DNN inference is amenable to both vertex-centric implementations and array-based implementations (e.g., using the GraphBLAS.org standard). The computations are simple enough that performance predictions can be made based on simple computing hardware models. The input data sets are derived from the MNIST handwritten digits. The surrounding I/O and verification provide the context for each sparse DNN inference and allow a rigorous definition of both the input and the output. Furthermore, since the proposed sparse DNN challenge is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Reference implementations have been developed, and their serial and parallel performance has been measured. Specifications, data, and software are publicly available at GraphChallenge.org.
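
As a concrete illustration of the array-based formulation mentioned in the abstract, the following is a minimal sketch of sparse DNN inference using SciPy sparse matrices as a stand-in for a GraphBLAS implementation. The capped-ReLU layer update, the bias handling (added only to stored nonzeros), and all variable names are illustrative assumptions, not the challenge's reference code.

import numpy as np
import scipy.sparse as sp

def sparse_dnn_inference(Y0, weights, biases, cap=32.0):
    """Propagate a sparse feature matrix through sparse layers with a capped ReLU."""
    Y = Y0.tocsr()
    for W, b in zip(weights, biases):
        Z = (Y @ W).tocsr()              # sparse-sparse matrix multiply
        Z.data += b                      # assumption: bias added only to stored
                                         # nonzeros, preserving sparsity
        Z.data = np.clip(Z.data, 0.0, cap)  # ReLU with an upper cap
        Z.eliminate_zeros()              # drop entries zeroed by the ReLU
        Y = Z
    return Y

# Toy usage with random sparse inputs and two random sparse layers.
Y0 = sp.random(8, 16, density=0.2, format="csr", random_state=0)
weights = [sp.random(16, 16, density=0.1, format="csr", random_state=i + 1)
           for i in range(2)]
biases = [-0.1, -0.1]
print(sparse_dnn_inference(Y0, weights, biases).nnz)

Adding the bias only to stored nonzeros is a common sparsity-preserving convention in sketches like this; a dense bias add would densify the intermediate result and defeat the purpose of the sparse formulation.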
Pages: 7