A learnable sampling method for scalable graph neural networks

Cited by: 4
Authors
Zhao, Weichen [1 ]
Guo, Tiande [1 ,2 ]
Yu, Xiaoxi [3 ]
Han, Congying [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci UCAS, Beijing, Peoples R China
[2] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing, Peoples R China
[3] MediaTek, Singapore, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Large-scale data; Learnable sampling method;
DOI
10.1016/j.neunet.2023.03.015
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the development of graph neural networks (GNNs), how to handle large-scale graph data has become an increasingly important topic. Currently, most GNN models that scale to large graphs rely on random sampling methods, but the sampling process in these models is detached from the forward propagation of the network. Moreover, quite a few works design sampling based on statistical estimation methods for graph convolutional networks (GCNs), in which the message-passing weights of nodes are fixed; such sampling methods therefore do not extend to message-passing networks with variable weights, such as graph attention networks. Noting the end-to-end learning capability of neural networks, we propose a learnable sampling method. It overcomes the fact that random sampling operations admit no gradient, and it samples nodes with non-fixed probabilities. In this way, the sampling process is dynamically combined with the forward propagation of the features, allowing the networks to be trained better, and it generalizes to all message-passing models. In addition, we apply the learnable sampling method to GNNs and propose two models. Our method can be flexibly combined with different graph neural network models and achieves excellent accuracy on benchmark datasets with large graphs. Meanwhile, the loss function converges to smaller values at a faster rate during training than with past methods. (c) 2023 Elsevier Ltd. All rights reserved.
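This record does not include the paper's implementation details, so as a general illustration of the problem the abstract highlights (a discrete sampling step that admits no gradient), here is a minimal NumPy sketch of the Gumbel-softmax relaxation, a standard technique for differentiable sampling; it is not necessarily the authors' exact mechanism, and the function name `gumbel_softmax_sample` and the toy logits are illustrative assumptions.

```python
import numpy as np

def gumbel_softmax_sample(logits, temperature, rng):
    # Adding Gumbel(0,1) noise makes argmax(logits + g) an exact sample
    # from softmax(logits); replacing the hard argmax with a tempered
    # softmax gives a soft, differentiable surrogate for that sample.
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u + 1e-20))       # Gumbel(0,1) noise
    y = (logits + g) / temperature
    y = y - y.max()                        # numerical stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
# Unnormalized, learnable sampling scores over three candidate neighbors.
logits = np.log(np.array([0.1, 0.2, 0.7]))
soft = gumbel_softmax_sample(logits, temperature=0.5, rng=rng)
# `soft` is a probability vector: soft weights over candidate neighbors.
```

Because the output is a smooth function of the logits, gradients from the downstream loss can flow back into the sampling scores; as the temperature is lowered toward zero, the soft weights approach a one-hot (hard) selection.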
Pages: 412-424
Page count: 13