Learning on Streaming Graphs with Experience Replay

Cited by: 4
Authors
Perini, Massimo [1 ]
Ramponi, Giorgia [2 ]
Carbone, Paris [3 ]
Kalavri, Vasiliki [4 ]
Affiliations
[1] Univ Edinburgh, Edinburgh, Midlothian, Scotland
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] KTH Royal Inst Technol, RISE Res Inst Sweden, Stockholm, Sweden
[4] Boston Univ, Boston, MA 02215 USA
Keywords
graph convolutional networks; online learning; streaming graphs;
DOI
10.1145/3477314.3507113
Chinese Library Classification (CLC)
TP39 [Applications of computers]
Discipline codes
081203; 0835
Abstract
Graph Neural Networks (GNNs) have recently achieved good performance in many predictive tasks involving graph-structured data. However, the majority of existing models consider static graphs only and do not support training on graph streams. While inductive representation learning can generate predictions for unseen vertices, these are accurate only if the learned graph structure and properties remain stable over time. In this paper, we study the problem of employing experience replay to enable continuous graph representation learning in the streaming setting. We propose two online training methods, Random-Based Rehearsal (RBR) and Priority-Based Rehearsal (PBR), which avoid retraining from scratch when changes occur. Our algorithms are the first streaming GNN models capable of scaling to million-edge graphs with low training latency and without compromising accuracy. We evaluate the accuracy and training performance of these experience replay methods on the node classification problem using real-world streaming graphs of various sizes and domains. Our results demonstrate that PBR and RBR achieve orders of magnitude faster training than offline methods while providing high accuracy and resilience to concept drift.
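To make the rehearsal idea in the abstract concrete, the sketch below shows a generic bounded replay buffer with the two sampling policies the paper names: uniform sampling (random-based rehearsal) and score-weighted sampling (priority-based rehearsal). This is an illustrative sketch only, not the authors' implementation; the ReplayBuffer class, the online_step function, and the model.train_on call are hypothetical names introduced here.

```python
import random

class ReplayBuffer:
    """Minimal rehearsal buffer: keeps a bounded set of previously seen
    training examples (e.g. vertex ids) and mixes them into each update."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []       # stored examples
        self.priorities = []  # one score per stored example

    def add(self, example, priority=1.0):
        # Evict the lowest-priority example once the buffer is full.
        if len(self.items) >= self.capacity:
            evict = min(range(len(self.items)), key=lambda i: self.priorities[i])
            self.items.pop(evict)
            self.priorities.pop(evict)
        self.items.append(example)
        self.priorities.append(priority)

    def sample_random(self, k):
        # Random-based rehearsal: uniform sample of stored examples.
        return random.sample(self.items, min(k, len(self.items)))

    def sample_priority(self, k):
        # Priority-based rehearsal: sample proportionally to stored scores.
        k = min(k, len(self.items))
        return random.choices(self.items, weights=self.priorities, k=k)


def online_step(model, new_vertices, buffer, rehearsal_size, use_priority):
    # Online training on a graph stream: update on the newly arrived vertices
    # plus a rehearsal batch drawn from the buffer, instead of retraining the
    # model from scratch on the whole graph.
    replay = (buffer.sample_priority(rehearsal_size) if use_priority
              else buffer.sample_random(rehearsal_size))
    batch = list(new_vertices) + replay
    loss = model.train_on(batch)       # hypothetical training call
    for v in new_vertices:
        buffer.add(v, priority=loss)   # e.g. use the latest loss as the score
    return loss
```

In this sketch the priority of a stored example is simply the most recent training loss observed when it arrived; the actual prioritization and eviction scheme used by PBR in the paper may differ.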
Pages: 470-478
Number of pages: 9