Stochastic Training of Graph Convolutional Networks with Variance Reduction

Cited by: 0
Authors
Chen, Jianfei [1 ]
Zhu, Jun [1 ]
Song, Le [2 ,3 ]
Affiliations
[1] Tsinghua Univ, THBI Lab, State Key Lab Intell Tech & Sys, Dept Comp Sci & Tech, BNRist Ctr, Beijing, Peoples R China
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
[3] Ant Financial, Hangzhou, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80 | 2018 / Vol. 80
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, a GCN computes the representation of a node recursively from its neighbors, so the receptive field size grows exponentially with the number of layers. Previous attempts to reduce the receptive field size by subsampling neighbors lack convergence guarantees, and their receptive field size per node is still on the order of hundreds. In this paper, we develop control-variate-based algorithms with a new theoretical guarantee of convergence to a local optimum of GCN regardless of the neighbor sampling size. Empirical results show that our algorithms achieve a convergence rate and model quality similar to the exact algorithm while using only two neighbors per node. The running time of our algorithms on a large Reddit dataset is only one-seventh that of previous neighbor-sampling algorithms.
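To make the idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of a control-variate estimator for one GCN aggregation layer, written in NumPy. It assumes a dense row-normalized propagation matrix P and a cache of historical activations hist; the names exact_aggregate, cv_aggregate, and the toy data are illustrative only.

```python
# Minimal sketch of control-variate neighbor sampling for one GCN layer.
# Assumption: P is a dense (n, n) row-normalized propagation matrix and
# `hist` holds stale ("historical") activations cached from earlier steps.
import numpy as np

def exact_aggregate(P, H):
    """Exact GCN aggregation: every node uses all of its neighbors."""
    return P @ H  # H: (n, d) current activations

def cv_aggregate(P, H, hist, num_samples=2, rng=None):
    """Control-variate aggregation with a small neighbor sample.

    For each node u the exact sum  sum_v P[u,v] H[v]  is approximated by
        sum_v P[u,v] hist[v]                                  (cheap, stale values)
      + |N(u)|/k * sum_{v in sample} P[u,v] (H[v] - hist[v])  (Monte Carlo correction)
    The correction is an unbiased estimate of the residual, and its variance
    shrinks as the historical activations approach the current ones.
    """
    rng = rng or np.random.default_rng(0)
    n, _ = H.shape
    out = P @ hist  # deterministic part from historical activations
    for u in range(n):
        neigh = np.flatnonzero(P[u])
        if neigh.size == 0:
            continue
        k = min(num_samples, neigh.size)
        sample = rng.choice(neigh, size=k, replace=False)
        scale = neigh.size / k
        out[u] += scale * (P[u, sample] @ (H[sample] - hist[sample]))
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 100, 16
    A = (rng.random((n, n)) < 0.1).astype(float)
    A = np.maximum(A, A.T)
    np.fill_diagonal(A, 1.0)                      # add self-loops
    P = A / A.sum(axis=1, keepdims=True)          # row-normalized propagation matrix
    H = rng.standard_normal((n, d))
    hist = H + 0.01 * rng.standard_normal((n, d))  # slightly stale cached activations
    err = np.linalg.norm(cv_aggregate(P, H, hist) - exact_aggregate(P, H))
    print(f"CV approximation error with 2 sampled neighbors per node: {err:.4f}")
```

Because the deterministic term reuses cached values, the only work that depends on fresh activations at each node involves the two sampled neighbors, which is what keeps the effective receptive field per node small.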
Pages: 9