Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey

Cited by: 2
Authors
Xin Liu [1,2]
Mingyu Yan [1]
Lei Deng [3,4]
Guoqi Li [3,4]
Xiaochun Ye [1]
Dongrui Fan [1,2,3]
Affiliations
[1] the State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences
[2] the School of Computer Science and Technology, University of Chinese Academy of Sciences
[3] IEEE
[4] the Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC number
TP183 [Artificial neural networks and computing]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405
Abstract
Graph convolutional networks (GCNs) have received significant attention across research fields for their excellent performance in learning graph representations. Although GCNs perform well compared with other methods, they still face challenges: training a GCN model on large-scale graphs in the conventional way incurs high computation and storage costs. Motivated by the urgent need for efficiency and scalability in GCN training, sampling methods have been proposed and have proved highly effective. In this paper, we categorize sampling methods by their sampling mechanisms and provide a comprehensive survey of sampling methods for efficient GCN training. To highlight the characteristics and differences of sampling methods, we present a detailed comparison within each category and give an overall comparative analysis across all categories. Finally, we discuss some challenges and future research directions for sampling methods.
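To illustrate the kind of sampling mechanism the abstract refers to, here is a minimal, self-contained Python sketch of node-wise neighbor sampling (the fixed fan-out scheme popularized by GraphSAGE, one of the families such surveys cover). The function name `sample_neighbors`, the toy graph, and the two-layer fan-outs are illustrative assumptions, not taken from the paper itself.

```python
import random

def sample_neighbors(adj, batch, fanouts, seed=0):
    """Node-wise neighbor sampling (GraphSAGE-style fan-out).

    At each layer, keep at most `fanout` randomly chosen neighbors
    per node, so the minibatch computation graph stays bounded
    instead of expanding to the full multi-hop receptive field.
    Returns one node set per layer, starting from the batch itself.
    """
    rng = random.Random(seed)
    layers = [set(batch)]
    frontier = set(batch)
    for fanout in fanouts:
        nxt = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            if len(nbrs) > fanout:
                nbrs = rng.sample(nbrs, fanout)  # subsample dense nodes
            nxt.update(nbrs)
        layers.append(nxt)
        frontier = nxt
    return layers

# Toy graph: a hub node 0 connected to nodes 1..99.
adj = {0: list(range(1, 100))}
for v in range(1, 100):
    adj[v] = [0]

# Without sampling, a 2-layer GCN rooted at node 0 would touch all
# 99 neighbors; with fan-outs of 5, each layer stays small.
layers = sample_neighbors(adj, batch=[0], fanouts=[5, 5])
print([len(s) for s in layers])  # → [1, 5, 1]
```

The trade-off, which the surveyed methods address in different ways, is that smaller fan-outs cut memory and compute per minibatch but introduce variance into the aggregated neighborhood representations.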
Pages: 205-234
Page count: 30
Related papers
50 records
  • [1] Sampling Methods for Efficient Training of Graph Convolutional Networks: A Survey
    Liu, Xin
    Yan, Mingyu
    Deng, Lei
    Li, Guoqi
    Ye, Xiaochun
    Fan, Dongrui
    [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2022, 9 (02) : 205 - 234
  • [2] Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling
    Li, Hongkang
    Wang, Meng
    Liu, Sijia
    Chen, Pin-Yu
    Xiong, Jinjun
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [3] Resource-Efficient Training for Large Graph Convolutional Networks with Label-Centric Cumulative Sampling
    Lin, Mingkai
    Li, Wenzhong
    Li, Ding
    Chen, Yizhou
    Lu, Sanglu
    [J]. PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 1170 - 1180
  • [4] PolicyClusterGCN: Identifying Efficient Clusters for Training Graph Convolutional Networks
    Gurukar, Saket
    Venkatakrishnan, Shaileshh Bojja
    Ravindran, Balaraman
    Parthasarathy, Srinivasan
    [J]. PROCEEDINGS OF THE 2023 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING, ASONAM 2023, 2023, : 245 - 252
  • [5] Enhancing graph convolutional networks with progressive granular ball sampling fusion: A novel approach to efficient and accurate GCN training
    Cong, Hui
    Sun, Qiguo
    Yang, Xibei
    Liu, Keyu
    Qian, Yuhua
    [J]. INFORMATION SCIENCES, 2024, 676
  • [6] Distributed Training of Graph Convolutional Networks
    Scardapane, Simone
    Spinelli, Indro
    Di Lorenzo, Paolo
    [J]. IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2021, 7 : 87 - 100
  • [7] Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks
    Zou, Difan
    Hu, Ziniu
    Wang, Yewen
    Jiang, Song
    Sun, Yizhou
    Gu, Quanquan
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [8] Robust graph convolutional networks with directional graph adversarial training
    Hu, Weibo
    Chen, Chuan
    Chang, Yaomin
    Zheng, Zibin
    Du, Yunfei
    [J]. APPLIED INTELLIGENCE, 2021, 51 (11) : 7812 - 7826
  • [9] Graph convolutional networks in language and vision: A survey
    Ren, Haotian
    Lu, Wei
    Xiao, Yun
    Chang, Xiaojun
    Wang, Xuanhong
    Dong, Zhiqiang
    Fang, Dingyi
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 251