SPARSIFYING THE UPDATE STEP IN GRAPH NEURAL NETWORKS

Cited by: 0
Authors
Lutzeyer, Johannes F. [1 ]
Wu, Changmin [1 ]
Vazirgiannis, Michalis [1 ]
Affiliations
[1] Ecole Polytech, Inst Polytech Paris, Lab Informat, DaSciM, Paris, France
Keywords
EXPANDER GRAPHS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Message-Passing Neural Networks (MPNNs), the most prominent Graph Neural Network (GNN) framework, celebrate much success in the analysis of graph-structured data. Concurrently, the sparsification of Neural Network models attracts a great amount of academic and industrial interest. In this paper we conduct a structured, empirical study of the effect of sparsification on the trainable part of MPNNs known as the Update step. To this end, we design a series of models to successively sparsify the linear transform in the Update step. Specifically, we propose the ExpanderGNN model with a tuneable sparsification rate and the Activation-Only GNN, which has no linear transform in the Update step. In agreement with a growing trend in the literature, the sparsification paradigm is changed by initialising sparse neural network architectures rather than expensively sparsifying already-trained architectures. Our novel benchmark models enable a better understanding of the influence of the Update step on model performance and outperform existing simplified benchmark models such as the Simple Graph Convolution. The ExpanderGNNs, and in some cases the Activation-Only models, achieve performance on par with their vanilla counterparts on several downstream tasks, while containing significantly fewer trainable parameters. Our code is publicly available at: https://github.com/ChangminWu/ExpanderGNN.
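As an illustration only (not the authors' implementation — see the linked repository for that), the two proposed Update-step variants can be sketched in NumPy. The function names `expander_mask`, `update_step`, and `activation_only_update` are hypothetical, and the random fixed sparse mask below is merely a stand-in for the expander-graph connectivity the paper uses; the key idea shown is that the mask is chosen at initialisation and only the retained weights are trainable:

```python
import numpy as np

rng = np.random.default_rng(0)

def expander_mask(in_dim, out_dim, degree, rng):
    # Fixed sparse connectivity chosen at initialisation: each output unit
    # keeps `degree` randomly selected input connections. Random sparse
    # bipartite graphs of this kind are expanders with high probability,
    # which is a stand-in for the paper's expander-graph construction.
    mask = np.zeros((in_dim, out_dim))
    for j in range(out_dim):
        keep = rng.choice(in_dim, size=degree, replace=False)
        mask[keep, j] = 1.0
    return mask

def update_step(h_agg, weight, mask):
    # Sparsified Update step: the dense weight matrix is elementwise-masked,
    # so only the retained entries act (and would be trained); ReLU follows.
    return np.maximum(h_agg @ (weight * mask), 0.0)

def activation_only_update(h_agg):
    # Activation-Only variant: no linear transform at all in the Update step.
    return np.maximum(h_agg, 0.0)

in_dim, out_dim, degree = 16, 8, 4
mask = expander_mask(in_dim, out_dim, degree, rng)
weight = rng.standard_normal((in_dim, out_dim))
h_agg = rng.standard_normal((5, in_dim))  # 5 nodes, aggregated neighbour features

out = update_step(h_agg, weight, mask)
print(out.shape)        # (5, 8)
print(int(mask.sum()))  # out_dim * degree = 32 retained weights
```

The tuneable sparsification rate corresponds to `degree / in_dim` here: lowering `degree` shrinks the number of trainable parameters per layer, and the Activation-Only model is the limiting case in which the linear transform is removed entirely.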
Pages: 11