Large language models can better understand knowledge graphs than we thought

Cited by: 0
Authors
Dai, Xinbang [1 ]
Hua, Yuncheng [2 ]
Wu, Tongtong [3 ]
Sheng, Yang [4 ]
Ji, Qiu [4 ]
Qi, Guilin [1 ]
Affiliations
[1] Southeast Univ, Nanjing, Jiangsu, Peoples R China
[2] Univ New South Wales, Sydney, NSW, Australia
[3] Monash Univ, Melbourne, Vic, Australia
[4] Nanjing Univ Posts & Telecommun, Nanjing, Jiangsu, Peoples R China
Keywords
Knowledge graph; Large language model;
DOI
10.1016/j.knosys.2025.113060
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
When we integrate factual knowledge from knowledge graphs (KGs) into large language models (LLMs) to enhance their performance, the cost of injection through training increases with the scale of the models. Consequently, there is significant interest in developing prompt strategies that effectively incorporate KG information into LLMs. However, the community does not yet comprehensively understand how LLMs process and interpret KG information presented in different input formats and organizations within prompts, and researchers often rely on trial and error. To address this gap, we design extensive experiments to empirically study LLMs' comprehension of different KG prompts. At the literal level, we reveal LLMs' preferences for various input formats, from linearized triples to fluent natural language (NL) text. At the attention distribution level, we discuss the underlying mechanisms driving these preferences. We then investigate how the organization of structured knowledge impacts LLMs and evaluate LLMs' robustness in processing and utilizing KG information in practical scenarios. Our experiments show that (1) linearized triples are more effective than fluent NL text in helping LLMs understand KG information and answer fact-intensive questions; (2) different LLMs exhibit varying preferences for different organizational formats of triples; and (3) larger-scale LLMs are more susceptible to noisy, incomplete subgraphs.
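To make the two input formats contrasted in the abstract concrete, the following minimal Python sketch shows how the same small subgraph might be serialized either as linearized triples or as fluent NL text before being placed in a prompt. The entities, relation names, question, and variable names are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code or data): two common ways to
# serialize the same KG subgraph for an LLM prompt.

# A made-up subgraph as (head, relation, tail) triples.
triples = [
    ("Marie Curie", "field_of_work", "physics"),
    ("Marie Curie", "award_received", "Nobel Prize in Physics"),
    ("Nobel Prize in Physics", "year_first_awarded", "1901"),
]

# Format 1: linearized triples, staying close to the KG structure.
linearized = "\n".join(f"({h}, {r}, {t})" for h, r, t in triples)

# Format 2: the same facts verbalized as fluent natural-language text.
fluent = (
    "Marie Curie worked in the field of physics and received the Nobel "
    "Prize in Physics, a prize first awarded in 1901."
)

question = "In which year was the prize that Marie Curie received first awarded?"

prompt_triples = f"Knowledge:\n{linearized}\n\nQuestion: {question}\nAnswer:"
prompt_fluent = f"Knowledge:\n{fluent}\n\nQuestion: {question}\nAnswer:"

print(prompt_triples)
print(prompt_fluent)
```

According to the abstract's first finding, prompts in the linearized-triple style tend to help LLMs answer fact-intensive questions more reliably than the verbalized variant.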
Pages: 10
Related Papers
50 records in total
  • [1] Do large language models "understand" their knowledge?
    Venkatasubramanian, Venkat
    AICHE JOURNAL, 2025, 71 (03)
  • [2] Can large language models understand molecules?
    Sadeghi, Shaghayegh
    Bui, Alan
    Forooghi, Ali
    Lu, Jianguo
    Ngom, Alioune
    BMC BIOINFORMATICS, 2024, 25 (01)
  • [3] Large Language Models With Holistically Thought Could Be Better Doctors
    Weng, Yixuan
    Li, Bin
    Xia, Fei
    Zhu, Minjun
    Sun, Bin
    He, Shizhu
    Liu, Shengping
    Li, Kang
    Li, Shutao
    Zhao, Jun
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT II, NLPCC 2024, 2025, 15360 : 319 - 332
  • [4] Reference is better than we thought
    Richardson, JV
    LIBRARY JOURNAL, 2002, 127 (07) : 41 - 42
  • [5] Unifying Large Language Models and Knowledge Graphs: A Roadmap
    Pan, Shirui
    Luo, Linhao
    Wang, Yufei
    Chen, Chen
    Wang, Jiapu
    Wu, Xindong
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (07) : 3580 - 3599
  • [6] Can We Edit Multimodal Large Language Models?
    Cheng, Siyuan
    Tian, Bozhong
    Liu, Qingbin
    Chen, Xi
    Wang, Yongheng
    Chen, Huajun
    Zhang, Ningyu
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 13877 - 13888
  • [7] Workshop on Enterprise Knowledge Graphs using Large Language Models
    Gupta, Rajeev
    Srinivasa, Srinath
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 5271 - 5272
  • [8] Knowledge Graphs, Large Language Models, and Hallucinations: An NLP Perspective
    Lavrinovics, Ernests
    Biswas, Russa
    Bjerva, Johannes
    Hose, Katja
    JOURNAL OF WEB SEMANTICS, 2025, 85
  • [9] WE'RE BETTER AT TEAMWORK THAN WE THOUGHT
    LAABS, JJ
    PERSONNEL JOURNAL, 1993, 72 (06) : 103 - 103
  • [10] Can we better understand strabismus research?
    Gou, Suqing
    Chen, Jiexiao
    Chen, Jieling
    EYE, 2025, 39 (04) : 798 - 799