Stochastic Training of Graph Convolutional Networks with Variance Reduction

Cited: 0
Authors
Chen, Jianfei [1 ]
Zhu, Jun [1 ]
Song, Le [2 ,3 ]
Affiliations
[1] Tsinghua Univ, THBI Lab, State Key Lab Intell Tech & Sys, Dept Comp Sci & Tech, BNRist Ctr, Beijing, Peoples R China
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
[3] Ant Financial, Hangzhou, Peoples R China
Source
INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, a GCN computes the representation of a node recursively from its neighbors, so the receptive field grows exponentially with the number of layers. Previous attempts to reduce the receptive field by subsampling neighbors have no convergence guarantee, and their per-node receptive field is still on the order of hundreds. In this paper, we develop control-variate-based algorithms with a new theoretical guarantee: they converge to a local optimum of GCN regardless of the neighbor sampling size. Empirical results show that our algorithms match the convergence rate and model quality of the exact algorithm while using only two neighbors per node, and their running time on a large Reddit dataset is only one-seventh that of previous neighbor sampling algorithms.
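To make the control-variate idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: the names cv_estimate and H_bar, and the dense propagation matrix P, are illustrative assumptions. It approximates the propagation P @ H by sampling a few neighbors only for the difference between the current activations H and cached historical activations H_bar, while propagating the cached term exactly; because H stays close to H_bar during training, the sampled term has low variance.

import numpy as np

def cv_estimate(P, H, H_bar, num_samples=2, rng=None):
    # Sketch of a control-variate (CV) estimator:
    #   P @ H  ~=  P_hat @ (H - H_bar) + P @ H_bar,
    # where H_bar caches stale (historical) activations and P_hat keeps
    # only a few sampled neighbors per node.
    rng = rng or np.random.default_rng(0)
    out = P @ H_bar                    # exact and cheap: H_bar is precomputed
    delta = H - H_bar                  # small once training stabilizes
    for u in range(P.shape[0]):
        neighbors = np.nonzero(P[u])[0]
        if neighbors.size == 0:
            continue
        k = min(num_samples, neighbors.size)
        sampled = rng.choice(neighbors, size=k, replace=False)
        # Unbiased Monte Carlo estimate of (P @ delta)[u] from k neighbors.
        out[u] += (neighbors.size / k) * (P[u, sampled] @ delta[sampled])
    return out

Calling cv_estimate(P, H, H_bar, num_samples=2) mirrors the two-neighbors-per-node setting reported above; the exactly computed P @ H_bar term is what distinguishes this estimator from plain neighbor subsampling, which drops it and therefore suffers much higher variance.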
Pages: 9
Related Papers (50 in total)
  • [41] Simplifying Graph Convolutional Networks
    Wu, Felix
    Zhang, Tianyi
    de Souza, Amauri Holanda, Jr.
    Fifty, Christopher
    Yu, Tao
    Weinberger, Kilian Q.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [42] Convolutional Graph Neural Networks
    Gama, Fernando
    Marques, Antonio G.
    Leus, Geert
    Ribeiro, Alejandro
    CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2019: 452-456
  • [43] Contrastive Graph Learning with Graph Convolutional Networks
    Nagendar, G.
    Sitaram, Ramachandrula
    DOCUMENT ANALYSIS SYSTEMS, DAS 2022, 2022, 13237: 96-110
  • [44] FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks
    Yao, Yuhang
    Jin, Weizhao
    Ravi, Srivatsan
    Joe-Wong, Carlee
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [45] Stochastic Expectation Maximization with Variance Reduction
    Chen, Jianfei
    Zhu, Jun
    Teh, Yee Whye
    Zhang, Tong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [46] Stochastic Variance Reduction for Nonconvex Optimization
    Reddi, Sashank J.
    Hefny, Ahmed
    Sra, Suvrit
    Poczos, Barnabas
    Smola, Alex
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [47] Stochastic Convolutional Recurrent Networks
    Chien, Jen-Tzung
    Huang, Yu-Min
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [48] Improved training of deep convolutional networks via minimum-variance regularized adaptive sampling
    Rojas-Dominguez, Alfonso
    Ivvan Valdez, S.
    Ornelas-Rodriguez, Manuel
    Carpio, Martin
    SOFT COMPUTING, 2023, 27 (18): 13237-13253
  • [50] Differentiable Graph Module (DGM) for Graph Convolutional Networks
    Kazi, Anees
    Cosmo, Luca
    Ahmadi, Seyed-Ahmad
    Navab, Nassir
    Bronstein, Michael M.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (02): 1606-1617