GenKP: generative knowledge prompts for enhancing large language models

Cited: 0
Authors
Xinbai Li [1 ]
Shaowen Peng [1 ]
Shuntaro Yada [1 ]
Shoko Wakamiya [2 ]
Eiji Aramaki [1 ]
Affiliations
[1] Nara Institute of Science and Technology
[2] University of Tsukuba
Keywords
Large language models; Knowledge graph; Knowledge prompts; In-context learning
DOI
10.1007/s10489-025-06318-3
Abstract
Large language models (LLMs) have demonstrated extensive capabilities across natural language processing (NLP) tasks. Knowledge graphs (KGs) harbor vast amounts of facts that can furnish external knowledge for language models. Structured knowledge extracted from KGs must be converted into sentences to match the input format required by LLMs. Previous research has commonly relied on triple-based or template-based conversion. However, sentences produced by these methods frequently suffer from semantic incoherence, ambiguity, and unnaturalness, which distort the original intent and cause the sentences to deviate from the facts. Meanwhile, although knowledge-enhanced pre-training and prompt-tuning methods have improved small-scale models, they are difficult to apply to LLMs when computational resources are limited. The advanced comprehension of LLMs enables in-context learning (ICL), which improves their performance without additional training. In this paper, we propose a knowledge-prompt generation method, GenKP, which injects knowledge into LLMs through ICL. Rather than inserting triple-converted or template-converted knowledge without selection, GenKP generates knowledge samples using LLMs in conjunction with KGs and selects among them through weighted verification and BM25 ranking, reducing knowledge noise. Experimental results show that incorporating knowledge prompts enhances the performance of LLMs, and that LLMs augmented with GenKP achieve larger improvements than methods using triple- and template-based knowledge injection.
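The selection step described in the abstract ranks candidate knowledge samples against a query with BM25. A minimal, self-contained sketch of such a ranking step is shown below; the function name, tokenizer, and parameter defaults are illustrative assumptions, not the paper's implementation:

```python
import math
import re
from collections import Counter


def bm25_rank(query, candidates, k1=1.5, b=0.75):
    """Rank candidate knowledge sentences against a query using BM25.

    Tokenization is a naive lowercase word split; a real system would use
    a proper tokenizer. Returns the candidates sorted by score, descending.
    """
    tokenize = lambda text: re.findall(r"\w+", text.lower())
    docs = [tokenize(c) for c in candidates]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n

    # Document frequency of each term across the candidate pool.
    df = Counter()
    for d in docs:
        for term in set(d):
            df[term] += 1

    def idf(term):
        # Smoothed BM25 idf; the +1 keeps scores non-negative.
        return math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))

    query_terms = tokenize(query)
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query_terms:
            if term not in tf:
                continue
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf(term) * num / den
        scores.append(score)

    # Sort candidates by score (stable for ties).
    order = sorted(range(n), key=lambda i: -scores[i])
    return [candidates[i] for i in order]
```

In a GenKP-style pipeline, the top-ranked sentences would then be inserted into the prompt as in-context knowledge, with low-scoring (noisy) samples discarded.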
Related papers
28 items in total
  • [21] KnowBug: Enhancing Large language models with bug report knowledge for deep learning framework bug prediction
    Li, Chenglong
    Zheng, Zheng
    Du, Xiaoting
    Ma, Xiangyue
    Wang, Zhengqi
    Li, Xinheng
    KNOWLEDGE-BASED SYSTEMS, 2024, 305
  • [22] A flood knowledge-constrained large language model interactable with GIS: enhancing public risk perception of floods
    Zhu, Jun
    Dang, Pei
    Cao, Yungang
    Lai, Jianbo
    Guo, Yukun
    Wang, Ping
    Li, Weilian
    INTERNATIONAL JOURNAL OF GEOGRAPHICAL INFORMATION SCIENCE, 2024, 38 (04) : 603 - 625
  • [23] Enhancing Zero-shot Audio Classification using Sound Attribute Knowledge from Large Language Models
    Xu, Xuenan
    Zhang, Pingyue
    Yang, Ming
    Zhang, Ji
    Wu, Mengyue
    INTERSPEECH 2024, 2024, : 4808 - 4812
  • [24] Logic-infused knowledge graph QA: Enhancing large language models for specialized domains through Prolog integration
    Bashir, Aneesa
    Peng, Rong
    Ding, Yongchang
    DATA & KNOWLEDGE ENGINEERING, 2025, 157
  • [25] Enhancing In-Context Learning of Large Language Models for Knowledge Graph Reasoning via Rule-and-Reinforce Selected Triples
    Wang, Shaofei
    APPLIED SCIENCES-BASEL, 2025, 15 (03)
  • [26] Enhancing text-based knowledge graph completion with zero-shot large language models: A focus on semantic enhancement
    Yang, Rui
    Zhu, Jiahao
    Man, Jianping
    Fang, Li
    Zhou, Yi
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [27] Enhancing multimodal-input object goal navigation by leveraging large language models for inferring room-object relationship knowledge
    Sun, Leyuan
    Kanezaki, Asako
    Caron, Guillaume
    Yoshiyasu, Yusuke
    ADVANCED ENGINEERING INFORMATICS, 2025, 65
  • [28] Prompt4Vis: prompting large language models with example mining for tabular data visualization
    Shuaimin Li
    Xuanang Chen
    Yuanfeng Song
    Yunze Song
    Chen Jason Zhang
    Fei Hao
    Lei Chen
    The VLDB Journal, 2025, 34 (4)