Large language models can better understand knowledge graphs than we thought

Cited by: 0
Authors
Dai, Xinbang [1 ]
Hua, Yuncheng [2 ]
Wu, Tongtong [3 ]
Sheng, Yang [4 ]
Ji, Qiu [4 ]
Qi, Guilin [1 ]
Affiliations
[1] Southeast Univ, Nanjing, Jiangsu, Peoples R China
[2] Univ New South Wales, Sydney, NSW, Australia
[3] Monash Univ, Melbourne, Vic, Australia
[4] Nanjing Univ Posts & Telecommun, Nanjing, Jiangsu, Peoples R China
Keywords
Knowledge graph; Large language model
DOI
10.1016/j.knosys.2025.113060
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
When we integrate factual knowledge from knowledge graphs (KGs) into large language models (LLMs) to enhance their performance, the cost of injecting that knowledge through training grows with model scale. Consequently, there is significant interest in developing prompt strategies that effectively incorporate KG information into LLMs. However, the community does not yet comprehensively understand how LLMs process and interpret KG information presented in different input formats and organizations within prompts, and researchers often rely on trial and error. To address this gap, we design extensive experiments to empirically study LLMs' comprehension of different KG prompts. At the literal level, we reveal LLMs' preferences for various input formats (from linearized triples to fluent natural-language text). At the attention-distribution level, we discuss the underlying mechanisms driving these preferences. We then investigate how the organization of structured knowledge affects LLMs and evaluate LLMs' robustness in processing and utilizing KG information in practical scenarios. Our experiments show that (1) linearized triples are more effective than fluent natural-language text in helping LLMs understand KG information and answer fact-intensive questions; (2) different LLMs exhibit varying preferences for different organizational formats of triples; and (3) larger LLMs are more susceptible to noisy, incomplete subgraphs.
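To make the comparison in the abstract concrete, here is a minimal Python sketch (illustrative only, not the authors' code; the example triples, templates, and question are assumptions) that renders the same small KG both ways the study contrasts: as linearized (head, relation, tail) triples and as naively verbalized natural-language text, each prepended to a question as a prompt.

# A minimal sketch, not the authors' code: the triples, templates, and
# question below are illustrative assumptions. It contrasts the two KG
# prompt formats the abstract compares.

triples = [
    ("Marie Curie", "received_award", "Nobel Prize in Physics"),
    ("Nobel Prize in Physics", "conferred_by", "Royal Swedish Academy of Sciences"),
]

def linearize(kg):
    # One "(head, relation, tail)" triple per line.
    return "\n".join(f"({h}, {r}, {t})" for h, r, t in kg)

def verbalize(kg):
    # Naive template verbalization, standing in for fluent NL text.
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in kg)

question = "Which body conferred the award that Marie Curie received?"

prompt_triples = f"Knowledge:\n{linearize(triples)}\n\nQuestion: {question}"
prompt_nl_text = f"Knowledge: {verbalize(triples)}\n\nQuestion: {question}"

print(prompt_triples, prompt_nl_text, sep="\n\n")

Under the paper's finding (1), the first prompt format would be expected to help an LLM more than the second on fact-intensive questions such as this one.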
Pages: 10