GenKP: generative knowledge prompts for enhancing large language models

Cited by: 0
Authors
Li, Xinbai [1 ]
Peng, Shaowen [1 ]
Yada, Shuntaro [1 ,2 ]
Wakamiya, Shoko [1 ]
Aramaki, Eiji [1 ]
Affiliations
[1] Nara Inst Sci & Technol, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
[2] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Keywords
Large language models; Knowledge graph; Knowledge prompts; In-context learning
DOI
10.1007/s10489-025-06318-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) have demonstrated extensive capabilities across various natural language processing (NLP) tasks. Knowledge graphs (KGs) harbor vast amounts of facts, furnishing external knowledge for language models. The structured knowledge extracted from KGs must be converted into sentences to match the input format required by LLMs. Previous research has commonly relied on triple conversion and template-based conversion. However, sentences converted with these methods frequently suffer from semantic incoherence, ambiguity, and unnaturalness, which distort the original intent and cause the sentences to deviate from the facts. Meanwhile, although knowledge-enhanced pre-training and prompt-tuning methods have yielded improvements on small-scale models, they are difficult to apply to LLMs without substantial computational resources. The advanced comprehension abilities of LLMs enable in-context learning (ICL), which improves their performance without additional training. In this paper, we propose GenKP, a knowledge-prompt generation method that injects knowledge into LLMs via ICL. Instead of inserting triple-converted or template-converted knowledge without selection, GenKP generates knowledge samples using LLMs in conjunction with KGs and filters those samples through weighted verification and BM25 ranking, reducing knowledge noise. Experimental results show that incorporating knowledge prompts enhances the performance of LLMs, and that LLMs augmented with GenKP achieve larger improvements than methods using triple-based and template-based knowledge injection.
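The abstract names BM25 ranking as one of GenKP's selection steps but gives no implementation details. The following minimal Python sketch is therefore only an illustrative reading of that step, assuming the candidates are LLM-generated knowledge sentences that already passed weighted verification: it scores candidates against the input question with Okapi BM25 and keeps the top-k as the in-context knowledge prompt. All function names, parameters, and example sentences are hypothetical, not taken from the paper.

```python
import math
from collections import Counter

def bm25_scores(query_tokens, docs_tokens, k1=1.5, b=0.75):
    """Score each tokenized candidate sentence against the query with Okapi BM25."""
    n = len(docs_tokens)
    avgdl = sum(len(d) for d in docs_tokens) / n
    # Document frequency and IDF for each distinct query term.
    df = {t: sum(1 for d in docs_tokens if t in d) for t in set(query_tokens)}
    idf = {t: math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5)) for t in df}
    scores = []
    for d in docs_tokens:
        tf = Counter(d)
        s = 0.0
        for t in query_tokens:
            if t not in tf:
                continue
            num = tf[t] * (k1 + 1)
            den = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            s += idf[t] * num / den
        scores.append(s)
    return scores

def select_knowledge_prompts(question, candidates, top_k=3):
    """Rank candidate knowledge sentences by BM25 and keep the top_k for ICL."""
    tokenize = lambda s: s.lower().split()
    scores = bm25_scores(tokenize(question), [tokenize(c) for c in candidates])
    ranked = sorted(zip(scores, candidates), reverse=True)
    return [c for _, c in ranked[:top_k]]

if __name__ == "__main__":
    question = "Where was Marie Curie born?"
    # In GenKP these would be LLM-generated sentences grounded in KG triples;
    # here they are invented purely for the demo.
    candidates = [
        "Marie Curie was born in Warsaw, Poland, in 1867.",
        "Marie Curie won two Nobel Prizes, in physics and chemistry.",
        "Warsaw is the capital city of Poland.",
    ]
    selected = select_knowledge_prompts(question, candidates, top_k=2)
    prompt = "\n".join(selected) + "\nQuestion: " + question + "\nAnswer:"
    print(prompt)  # prepended knowledge + question, passed to the LLM as ICL input
```

Because BM25 rewards lexical overlap with the question, this kind of ranking keeps the knowledge sentences most likely to be on-topic and discards weakly related ones, which matches the abstract's stated goal of reducing knowledge noise before injection.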
Pages: 15