Low-resource knowledge graph completion based on knowledge distillation driven by large language models

Cited by: 0

Authors:
[1] Hou, Wenlong
[2] Zhao, Weidong
[3] Jia, Ning
[4] Liu, Xianhui
Keywords:
Graph embeddings;
DOI:
10.1016/j.asoc.2024.112622
Abstract:
Knowledge graph completion (KGC) refines an existing knowledge graph (KG) by predicting missing entities or relations. Existing methods are mainly based on embeddings or texts but perform well only with abundant labeled data. KGC in resource-constrained settings is therefore a significant problem, facing the challenges of data imbalance across relations and a lack of relation label semantics. Since Large Language Models (LLMs) demonstrate powerful reasoning and generation capabilities, this work proposes an LLM-driven Knowledge Graph Completion Distillation (KGCD) model to address low-resource KGC. A two-stage framework is developed: teacher-student distillation that uses an LLM to improve reasoning, followed by fine-tuning on real-world low-resource datasets. To deal with data imbalance, a hybrid prompt design for the LLM is proposed, comprising rethink and open prompts. Furthermore, a virtual relation label generation strategy enhances the model's understanding of triples. Extensive experiments on three benchmarks demonstrate KGCD's effectiveness for low-resource KGC, with improvements in Mean Reciprocal Rank (MRR) of 11% and Hits@1 of 10% on WN18, MRR of 10% and Hits@1 of 14% on WN18RR, and MRR of 12% and Hits@1 of 11% on YAGO3-10. © 2024 Elsevier B.V.
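The reported gains are measured with the standard KGC ranking metrics MRR and Hits@1. The following is a minimal illustrative sketch (not code from the paper) of how these metrics are conventionally computed from the filtered rank assigned to the gold entity of each test triple; the function names and example ranks are hypothetical.

    from typing import List

    def mean_reciprocal_rank(ranks: List[int]) -> float:
        # Average of 1/rank over all test triples; higher is better.
        return sum(1.0 / r for r in ranks) / len(ranks)

    def hits_at_k(ranks: List[int], k: int = 1) -> float:
        # Fraction of test triples whose gold entity is ranked within the top k.
        return sum(1 for r in ranks if r <= k) / len(ranks)

    if __name__ == "__main__":
        example_ranks = [1, 3, 2, 1, 10]  # hypothetical filtered ranks
        print(f"MRR    = {mean_reciprocal_rank(example_ranks):.3f}")
        print(f"Hits@1 = {hits_at_k(example_ranks, 1):.3f}")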
Related papers (50 items in total):
  • [21] Rule-based Knowledge Graph Completion with Canonical Models
    Ott, Simon
    Betz, Patrick
    Stepanova, Daria
    Gad-Elrab, Mohamed H.
    Meilicke, Christian
    Stuckenschmidt, Heiner
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1971 - 1981
  • [22] Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion
    Fan, Cunhang
    Chen, Yujie
    Xue, Jun
    Kong, Yonghui
    Tao, Jianhua
    Lv, Zhao
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8380 - 8388
  • [23] Low-resource Low-footprint Wake-word Detection using Knowledge Distillation
    Ghosh, Arindam
    Fuhs, Mark
    Bagchi, Deblin
    Farahani, Bahman
    Woszczyna, Monika
    INTERSPEECH 2022, 2022, : 3739 - 3743
  • [24] Fusing topology contexts and logical rules in language models for knowledge graph completion
    Lin, Qika
    Mao, Rui
    Liu, Jun
    Xu, Fangzhi
    Cambria, Erik
    INFORMATION FUSION, 2023, 90 : 253 - 264
  • [25] Connecting AI: Merging Large Language Models and Knowledge Graph
    Jovanovic, Mladan
    Campbell, Mark
    COMPUTER, 2023, 56 (11) : 103 - 108
  • [26] Enhancing text-based knowledge graph completion with zero-shot large language models: A focus on semantic enhancement
    Yang, Rui
    Zhu, Jiahao
    Man, Jianping
    Fang, Li
    Zhou, Yi
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [27] Improving Low-Resource Neural Machine Translation With Teacher-Free Knowledge Distillation
    Zhang, Xinlu
    Li, Xiao
    Yang, Yating
    Dong, Rui
    IEEE ACCESS, 2020, 8 : 206638 - 206645
  • [28] Web Content Filtering through knowledge distillation of Large Language Models
    Voros, Tamas
    Bergeron, Sean Paul
    Berlin, Konstantin
    2023 IEEE INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WI-IAT, 2023, : 357 - 361
  • [29] Large Language Models and Low-Resource Languages: An Examination of Armenian NLP
    Avetisyan, Hayastan
    Broneske, David
    13TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING AND THE 3RD CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, IJCNLP-AACL 2023, 2023, : 199 - 210
  • [30] SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
    Wang, Liang
    Zhao, Wei
    Wei, Zhuoyu
    Liu, Jingming
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 4281 - 4294