GenKP: generative knowledge prompts for enhancing large language models

Cited: 0
Authors
Li, Xinbai [1 ]
Peng, Shaowen [1 ]
Yada, Shuntaro [1 ,2 ]
Wakamiya, Shoko [1 ]
Aramaki, Eiji [1 ]
Affiliations
[1] Nara Inst Sci & Technol, 8916-5 Takayama-cho, Ikoma, Nara 630-0192, Japan
[2] Univ Tsukuba, Tsukuba, Ibaraki, Japan
Keywords
Large language models; Knowledge graph; Knowledge prompts; In-context learning;
DOI
10.1007/s10489-025-06318-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large language models (LLMs) have demonstrated extensive capabilities across various natural language processing (NLP) tasks. Knowledge graphs (KGs) harbor vast amounts of facts, furnishing external knowledge for language models. The structured knowledge extracted from KGs must be converted into sentences to match the input format required by LLMs. Previous research has commonly relied on triple conversion and template-based conversion. However, sentences produced by these methods frequently suffer from semantic incoherence, ambiguity, and unnaturalness, which distort the original intent and cause the sentences to deviate from the facts. Meanwhile, although knowledge-enhanced pre-training and prompt-tuning methods have yielded improvements on small-scale models, they are difficult to apply to LLMs without substantial computational resources. The advanced comprehension of LLMs enables in-context learning (ICL), improving their performance without additional training. In this paper, we propose a knowledge prompt generation method, GenKP, which injects knowledge into LLMs via ICL. Instead of inserting triple-converted or template-converted knowledge without selection, GenKP generates knowledge samples using LLMs in conjunction with KGs and selects among them through weighted verification and BM25 ranking, reducing knowledge noise. Experimental results demonstrate that incorporating knowledge prompts enhances the performance of LLMs. Furthermore, LLMs augmented with GenKP achieve larger improvements than methods using triple-based and template-based knowledge injection.
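The BM25 ranking step mentioned in the abstract — scoring candidate knowledge sentences against a query so that only the most relevant ones are kept as prompts — can be sketched as follows. This is a minimal illustration using the standard Okapi BM25 formula with whitespace tokenization; the function name and example data are illustrative and not taken from the paper.

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank candidate knowledge sentences against a query with Okapi BM25.

    Returns the sentences sorted by descending relevance score.
    k1 controls term-frequency saturation; b controls length normalization.
    """
    tokenized = [d.lower().split() for d in docs]
    q_terms = query.lower().split()
    n = len(docs)
    avgdl = sum(len(t) for t in tokenized) / n

    # Document frequency: in how many sentences each term appears.
    df = Counter()
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1

    def idf(term):
        # Smoothed inverse document frequency (always non-negative).
        return math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)

    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        dl = len(toks)
        score = sum(
            idf(t) * tf[t] * (k1 + 1)
            / (tf[t] + k1 * (1 - b + b * dl / avgdl))
            for t in q_terms
            if t in tf
        )
        scores.append(score)

    return [d for _, d in sorted(zip(scores, docs), key=lambda p: -p[0])]

# Example: rank generated knowledge sentences for a question.
candidates = [
    "Paris is the capital of France.",
    "Bananas are rich in potassium.",
    "France borders Spain and Italy.",
]
ranked = bm25_rank("capital of France", candidates)
```

In a pipeline like the one the abstract describes, the top-ranked sentences would then be inserted into the LLM prompt as in-context knowledge, while low-scoring (noisy) candidates are discarded.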
Pages: 15