Discovering the Representation Bottleneck of Graph Neural Networks

Cited by: 1
Authors
Wu, Fang [1 ]
Li, Siyuan [1 ]
Li, Stan Z. [2 ]
Affiliations
[1] Westlake Univ, Sch Engn, Hangzhou 310024, Peoples R China
[2] Westlake Univ, Fac Sch Engn, Hangzhou 310024, Peoples R China
Keywords
Complexity theory; Task analysis; Encoding; Knowledge engineering; Nearest neighbor methods; Input variables; Graph neural networks; Graph neural network; representation bottleneck; graph rewiring; AI for science; PRIVACY
DOI
10.1109/TKDE.2024.3446584
CLC classification number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) rely mainly on the message-passing paradigm to propagate node features and model interactions, and different graph learning problems require different ranges of node interactions. In this work, we examine the capacity of GNNs to capture node interactions in contexts of varying complexity. We find that GNNs often fail to capture the most informative interaction styles for diverse graph learning tasks, a phenomenon we call the representation bottleneck of GNNs. We further show that the inductive bias introduced by existing graph construction mechanisms can cause this bottleneck, i.e., it prevents GNNs from learning interactions of the most appropriate complexity. To address this limitation, we propose a novel graph rewiring approach that uses the interaction patterns learned by GNNs to adjust each node's receptive field dynamically. Extensive experiments on both real-world and synthetic datasets demonstrate that our algorithm alleviates the representation bottleneck and outperforms state-of-the-art graph rewiring baselines in improving GNN performance.
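To make the rewiring idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' algorithm) of dynamic neighborhood rewiring: each node's neighbor set is rebuilt with a node-specific size derived from a per-node interaction score, so nodes that need wider interactions receive larger receptive fields. The function and parameter names (dynamic_knn_rewire, node_scores, base_k, max_k) are assumptions introduced for this illustration, and the scores merely stand in for the interaction patterns a trained GNN would supply.

# Minimal, hypothetical sketch of dynamic neighborhood rewiring (illustration
# only; the names and heuristics below are assumptions, not the authors' method).
import numpy as np

def dynamic_knn_rewire(features, node_scores, base_k=4, max_k=16):
    """Rebuild each node's neighbor set with a node-specific size:
    nodes with higher interaction scores get larger receptive fields."""
    n = features.shape[0]
    # Pairwise Euclidean distances between node feature vectors.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)          # exclude self-loops
    # Normalize scores to [0, 1] and map them to per-node neighborhood sizes.
    s = (node_scores - node_scores.min()) / (np.ptp(node_scores) + 1e-12)
    ks = np.clip((base_k + s * (max_k - base_k)).astype(int), 1, n - 1)
    edges = []
    for i in range(n):
        for j in np.argsort(dists[i])[: ks[i]]:
            edges.append((i, int(j)))        # directed edge i -> j
    return edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(10, 8))        # 10 nodes with 8-dim features
    scores = rng.uniform(size=10)       # stand-in for learned interaction scores
    print(len(dynamic_knn_rewire(x, scores)), "directed edges after rewiring")

The sketch uses plain NumPy and Euclidean k-nearest neighbors so it is self-contained; a real implementation would derive the scores from a trained GNN and feed the rewired edge list back into message passing.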
Pages: 7998-8008
Number of pages: 11