Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs

Cited by: 0
Authors
Wang, Ruijie [1 ]
Li, Zheng [2 ]
Sun, Dachun [1 ]
Liu, Shengzhong [1 ]
Li, Jinning [1 ]
Yin, Bing [2 ]
Abdelzaher, Tarek [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 61820 USA
[2] Amazon Com Inc, Santa Clara, CA USA
Keywords
NETWORK;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we investigate a realistic but underexplored problem, called few-shot temporal knowledge graph reasoning, that aims to predict future facts for newly emerging entities based on extremely limited observations in evolving graphs. It offers practical value in applications that need to derive instant new knowledge about new entities in temporal knowledge graphs (TKGs) with minimal supervision. The challenges mainly come from the few-shot and time-shift properties of new entities. First, the limited observations associated with them are insufficient for training a model from scratch. Second, the potentially dynamic distributions from the initially observable facts to the future facts call for explicitly modeling the evolving characteristics of new entities. We correspondingly propose a novel Meta Temporal Knowledge Graph Reasoning (MetaTKGR) framework. Unlike prior work that relies on rigid neighborhood aggregation schemes to enhance low-data entity representation, MetaTKGR dynamically adjusts the strategies of sampling and aggregating neighbors from recent facts for new entities, using temporally supervised signals on future facts as instant feedback. Moreover, such a meta temporal reasoning procedure goes beyond existing meta-learning paradigms on static knowledge graphs, which fail to handle temporal adaptation with large entity variance. We further provide a theoretical analysis and propose a temporal adaptation regularizer to stabilize the meta temporal reasoning over time. Empirically, extensive experiments on three real-world TKGs demonstrate the superiority of MetaTKGR over state-of-the-art baselines by a large margin.
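The abstract's core idea is that a new entity's representation is built by sampling its most recent neighboring facts and aggregating them with temporal weighting. The paper's actual learned sampling/aggregation strategy is not reproduced here; the following is only a toy sketch of the general notion of time-aware neighbor sampling and recency-weighted aggregation over (head, relation, tail, timestamp) quadruples. All entity names, embeddings, and the recency-weighting formula are illustrative assumptions, not from MetaTKGR.

```python
# Toy temporal KG: (head, relation, tail, timestamp) quadruples.
# Entities, relations, and timestamps here are made up for illustration.
quadruples = [
    ("new_entity", "works_with", "e1", 1),
    ("new_entity", "cites", "e2", 3),
    ("new_entity", "works_with", "e3", 7),
    ("e1", "cites", "e2", 2),
]

# Tiny fixed 2-d embeddings, standing in for learned entity vectors.
embed = {
    "e1": [1.0, 0.0],
    "e2": [0.0, 1.0],
    "e3": [1.0, 1.0],
}

def sample_recent_neighbors(entity, quads, query_time, k):
    """Return up to k facts about `entity` that precede `query_time`,
    preferring the most recent ones (a simple time-aware sampler)."""
    facts = [(h, r, t, ts) for (h, r, t, ts) in quads
             if h == entity and ts < query_time]
    facts.sort(key=lambda f: f[3], reverse=True)  # most recent first
    return facts[:k]

def aggregate(entity, quads, query_time, k=2):
    """Recency-weighted mean of sampled neighbor embeddings: more
    recent facts receive larger weights (an assumed weighting scheme)."""
    neighbors = sample_recent_neighbors(entity, quads, query_time, k)
    if not neighbors:
        return [0.0, 0.0]
    total = [0.0, 0.0]
    weight_sum = 0.0
    for (_, _, tail, ts) in neighbors:
        w = 1.0 / (1.0 + (query_time - ts))  # newer facts weigh more
        total = [a + w * b for a, b in zip(total, embed[tail])]
        weight_sum += w
    return [x / weight_sum for x in total]

# Representation of a newly emerging entity at query time 8,
# built from its two most recent observed facts.
rep = aggregate("new_entity", quadruples, query_time=8, k=2)
print(rep)
```

In the paper this sampling and weighting is learned per entity via meta-learning, with prediction error on future facts serving as the supervision signal; the fixed `1/(1+Δt)` decay above merely stands in for that adaptive strategy.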
Pages: 14