Efficient Graph Generation with Graph Recurrent Attention Networks

Cited by: 0
Authors
Liao, Renjie [1 ,2 ,3 ]
Li, Yujia [4 ]
Song, Yang [5 ]
Wang, Shenlong [1 ,2 ,3 ]
Hamilton, William L. [6 ,7 ]
Duvenaud, David [1 ,3 ]
Urtasun, Raquel [1 ,2 ,3 ]
Zemel, Richard [1 ,3 ,8 ]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Uber ATG Toronto, Toronto, ON, Canada
[3] Vector Inst, Toronto, ON, Canada
[4] DeepMind, London, England
[5] Stanford Univ, Stanford, CA 94305 USA
[6] McGill Univ, Montreal, PQ, Canada
[7] Mila Quebec Artificial Intelligence Inst, Montreal, PQ, Canada
[8] Canadian Inst Adv Res, Toronto, ON, Canada
Keywords
EVOLUTION
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
We propose a new family of efficient and expressive deep generative models of graphs, called Graph Recurrent Attention Networks (GRANs). Our model generates graphs one block of nodes and associated edges at a time. The block size and sampling stride allow us to trade off sample quality for efficiency. Compared to previous RNN-based graph generative models, our framework better captures the auto-regressive conditioning between the already-generated and to-be-generated parts of the graph using Graph Neural Networks (GNNs) with attention. This not only reduces the dependency on node ordering but also bypasses the long-term bottleneck caused by the sequential nature of RNNs. Moreover, we parameterize the output distribution per block using a mixture of Bernoulli, which captures the correlations among generated edges within the block. Finally, we propose to handle node orderings in generation by marginalizing over a family of canonical orderings. On standard benchmarks, we achieve state-of-the-art time efficiency and sample quality compared to previous models. Additionally, we show our model is capable of generating large graphs of up to 5K nodes with good quality.
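The mixture-of-Bernoulli output distribution described in the abstract can be made concrete with a short sketch. The PyTorch snippet below computes the log-likelihood of one generated block of candidate edges under a K-component mixture of Bernoulli; the function name, tensor shapes, and the way the logits are produced are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def mixture_bernoulli_log_prob(edge_labels, theta_logits, alpha_logits):
    # Log-likelihood of one generated block of candidate edges under a
    # K-component mixture of Bernoulli. Shapes and names are illustrative
    # assumptions, not the paper's API:
    #   edge_labels:  (E,) float 0/1 indicators for the E candidate edges
    #   theta_logits: (K, E) per-component Bernoulli logits for each edge
    #   alpha_logits: (K,) unnormalized mixture weights
    # Per-edge Bernoulli log-probability under every mixture component.
    log_p_edge = -F.binary_cross_entropy_with_logits(
        theta_logits, edge_labels.expand_as(theta_logits), reduction="none")
    # Edges are conditionally independent given the component, so each
    # component's log-likelihood is a sum over the block's edges.
    log_p_component = log_p_edge.sum(dim=-1)             # (K,)
    log_alpha = torch.log_softmax(alpha_logits, dim=-1)  # (K,)
    # Marginalize over components; the mixing is what induces
    # correlations among the edges generated within the block.
    return torch.logsumexp(log_alpha + log_p_component, dim=-1)

# Toy usage: a block with 4 candidate edges and a 3-component mixture.
labels = torch.tensor([1., 0., 1., 1.])
print(mixture_bernoulli_log_prob(labels, torch.randn(3, 4), torch.randn(3)))

Because the mixture weights are shared across all edges in a block, marginalizing over components lets even a small K capture edge correlations that a single fully factorized Bernoulli cannot.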
Pages: 11
Related Papers (50 in total)
  • [1] Song Xianduo; Wang Xin; Song Yuyuan; Zuo Xianglin; Wang Ying. Hierarchical recurrent neural networks for graph generation. Information Sciences, 2022, 589: 250-264.
  • [2] Frey, Christian M. M.; Ma, Yunpu; Schubert, Matthias. SEA: Graph Shell Attention in Graph Neural Networks. Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2022), Pt II, 2023, 13714: 326-343.
  • [3] Chatzianastasis, Michail; Lutzeyer, Johannes; Dasoulas, George; Vazirgiannis, Michalis. Graph Ordering Attention Networks. Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023, 37(6): 7006-7014.
  • [4] Ye, Yang; Ji, Shihao. Sparse Graph Attention Networks. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(1): 905-916.
  • [5] Amine, Ouardi; Mestari, Mohammed. Graph Oriented Attention Networks. IEEE Access, 2024, 12: 47057-47067.
  • [6] Huang, Junjie; Shen, Huawei; Hou, Liang; Cheng, Xueqi. Signed Graph Attention Networks. Artificial Neural Networks and Machine Learning (ICANN 2019): Workshop and Special Sessions, 2019, 11731: 566-577.
  • [7] Montero, David; Javier Yebes, J. Combining Graph and Recurrent Networks for Efficient and Effective Segment Tagging. Learning on Graphs Conference, Vol 198, 2022.
  • [8] Zhao, Yanbin; Chen, Lu; Chen, Zhi; Cao, Ruisheng; Zhu, Su; Yu, Kai. Line Graph Enhanced AMR-to-Text Generation with Mix-Order Graph Attention Networks. 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020: 732-741.
  • [9] Shanthamallu, Uday Shankar; Thiagarajan, Jayaraman J.; Spanias, Andreas. A Regularized Attention Mechanism for Graph Attention Networks. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 3372-3376.
  • [10] Wang, Rui; Li, Bicheng; Hu, Shengwei; Du, Wenqian; Zhang, Min. Knowledge Graph Embedding via Graph Attenuated Attention Networks. IEEE Access, 2020, 8: 5212-5224.