Stochastic Training of Graph Convolutional Networks with Variance Reduction

Cited: 0
Authors
Chen, Jianfei [1 ]
Zhu, Jun [1 ]
Song, Le [2 ,3 ]
Affiliations
[1] Tsinghua Univ, THBI Lab, State Key Lab Intell Tech & Sys, Dept Comp Sci & Tech,BNRist Ctr, Beijing, Peoples R China
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
[3] Ant Financial, Hangzhou, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80 | 2018 / Vol. 80
Keywords
DOI
None available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, a GCN computes the representation of a node recursively from its neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts to reduce the receptive field size by subsampling neighbors lack convergence guarantees, and their receptive field size per node is still on the order of hundreds. In this paper, we develop control variate based algorithms with a new theoretical guarantee of convergence to a local optimum of GCN, regardless of the neighbor sampling size. Empirical results show that our algorithms match the convergence rate and model quality of the exact algorithm while using only two neighbors per node. The running time of our algorithms on the large Reddit dataset is only one seventh that of previous neighbor sampling algorithms.
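The control-variate idea the abstract describes can be illustrated with a small numerical sketch (toy data only, not the paper's implementation): rather than averaging the current activations of a few sampled neighbors, the estimator averages the sampled *deviations* from cached historical activations and adds back the exact mean of the cheap historical values. Because the deviations are small when activations change slowly between epochs, the Monte Carlo variance shrinks accordingly. All variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: one node with many neighbors, each with an activation vector.
n_neighbors, dim = 200, 8
h = rng.normal(size=(n_neighbors, dim))        # current neighbor activations
h_hist = h + 0.1 * rng.normal(size=h.shape)    # slightly stale cached ("historical") activations

exact = h.mean(axis=0)                         # full-neighborhood aggregation (the target)

def naive_sample(k):
    """Plain neighbor sampling: average k sampled current activations."""
    idx = rng.choice(n_neighbors, size=k, replace=False)
    return h[idx].mean(axis=0)

def cv_sample(k):
    """Control-variate estimator: sample only the small deviation from
    history, then add the exact mean of the cached historical activations."""
    idx = rng.choice(n_neighbors, size=k, replace=False)
    return (h[idx] - h_hist[idx]).mean(axis=0) + h_hist.mean(axis=0)

def mse(estimator, k=2, trials=2000):
    """Mean squared error of the estimator against the exact aggregation."""
    return np.mean([np.sum((estimator(k) - exact) ** 2) for _ in range(trials)])

print(f"naive sampling (k=2) MSE: {mse(naive_sample):.4f}")
print(f"control variate (k=2) MSE: {mse(cv_sample):.4f}")
```

With only two sampled neighbors, the control-variate estimator's error is driven by the small history deviation rather than by the activations themselves, which is why the paper can converge with a sampling size as small as two.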
Pages: 9
Related Papers
50 records in total
  • [21] Graph sparsification with graph convolutional networks
    Jiayu Li
    Tianyun Zhang
    Hao Tian
    Shengmin Jin
    Makan Fardad
    Reza Zafarani
    International Journal of Data Science and Analytics, 2022, 13 : 33 - 46
  • [22] Rank-based self-training for graph convolutional networks
    Guimaraes Pedronette, Daniel Carlos
    Latecki, Longin Jan
    INFORMATION PROCESSING & MANAGEMENT, 2021, 58 (02)
  • [23] FairSample: Training Fair and Accurate Graph Convolutional Neural Networks Efficiently
    Cong, Zicun
    Shi, Baoxu
    Li, Shan
    Yang, Jaewon
    He, Qi
    Pei, Jian
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (04) : 1537 - 1551
  • [24] GRAPH CONVOLUTIONAL NETWORKS & ADVERSARIAL TRAINING FOR JOINT EXTRACTION OF ENTITY AND RELATION
    Qu, Xiaolong
    Zhang, Yang
    Tian, Ziwei
    LI, Yuxun
    LI, Dongmei
    Zhang, Xiaoping
    UNIVERSITY POLITEHNICA OF BUCHAREST SCIENTIFIC BULLETIN SERIES C-ELECTRICAL ENGINEERING AND COMPUTER SCIENCE, 2023, 85 (03): : 213 - 224
  • [25] Entropy-aware self-training for graph convolutional networks
    Zhao, Gongpei
    Wang, Tao
    Li, Yidong
    Jin, Yi
    Lang, Congyan
    NEUROCOMPUTING, 2021, 464 : 394 - 407
  • [26] Training Matters: Unlocking Potentials of Deeper Graph Convolutional Neural Networks
    Luan, Sitao
    Zhao, Mingde
    Chang, Xiao-Wen
    Precup, Doina
    COMPLEX NETWORKS & THEIR APPLICATIONS XII, VOL 1, COMPLEX NETWORKS 2023, 2024, 1141 : 49 - 60
  • [28] GIST: distributed training for large-scale graph convolutional networks
    Wolfe C.R.
    Yang J.
    Liao F.
    Chowdhury A.
    Dun C.
    Bayer A.
    Segarra S.
    Kyrillidis A.
    Journal of Applied and Computational Topology, 2024, 8 (5) : 1363 - 1415
  • [29] Graph Convolutional Kernel Machine versus Graph Convolutional Networks
    Wu, Zhihao
    Zhang, Zhao
    Fan, Jicong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [30] Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks
    Cong, Weilin
    Forsati, Rana
    Kandemir, Mahmut
    Mahdavi, Mehrdad
    KDD '20: PROCEEDINGS OF THE 26TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2020, : 1393 - 1403