ReLMKG: reasoning with pre-trained language models and knowledge graphs for complex question answering

Cited by: 10
Authors
Cao, Xing [1,2]
Liu, Yun [1,2]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
[2] Beijing Municipal Commiss Educ, Key Lab Commun & Informat Syst, Beijing 100044, Peoples R China
Keywords
Complex question answering; Pre-trained language model; Knowledge graph; Joint reasoning; WEB;
DOI
10.1007/s10489-022-04123-w
Chinese Library Classification (CLC) number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
The goal of complex question answering over knowledge bases (KBQA) is to find an answer entity in a knowledge graph. Recent information retrieval-based methods focus on the topology of the knowledge graph, ignore the inconsistency between knowledge graph embeddings and natural language embeddings, and cannot effectively utilize both implicit and explicit knowledge for reasoning. In this paper, we propose a novel model, ReLMKG, to address this challenge. It performs joint reasoning over a pre-trained language model and the associated knowledge graph. The complex question and textual paths are encoded by the language model, which bridges the gap between the question and the knowledge graph and exploits implicit knowledge without introducing additional unstructured text. The outputs of different layers of the language model serve as instructions that guide a graph neural network to perform message propagation and aggregation step by step, exploiting the explicit knowledge contained in the structured knowledge graph. We analyse the reasoning ability of ReLMKG on knowledge graphs with different degrees of sparsity and evaluate the generalizability of the model. Experiments on the ComplexWebQuestions and WebQuestionsSP datasets demonstrate the effectiveness of our approach on KBQA tasks.
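As a concrete illustration of the mechanism described in the abstract, the following is a minimal, self-contained PyTorch sketch of the "layer outputs as instructions" idea, assuming a toy Transformer encoder in place of the pre-trained language model: a pooled hidden state from each encoder layer conditions one round of message propagation and aggregation over the question subgraph, and each graph node is finally scored as a candidate answer. All class names, dimensions, the mean-pooling of layer outputs and the GRU-based node update are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code): pooled hidden states from
# successive encoder layers act as "instructions" that condition successive rounds
# of message passing over the question subgraph; each node is then scored as a
# candidate answer entity.
import torch
import torch.nn as nn


class InstructionGuidedStep(nn.Module):
    """One message-passing step conditioned on an instruction vector."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # combine node state with the instruction
        self.update = nn.GRUCell(dim, dim)   # update node states from aggregated messages

    def forward(self, node_states, adj, instruction):
        # Broadcast the (dim,) instruction to every node, build and aggregate messages.
        inst = instruction.unsqueeze(0).expand(node_states.size(0), -1)
        messages = torch.relu(self.msg(torch.cat([node_states, inst], dim=-1)))
        aggregated = adj @ messages          # sum messages over neighbours
        return self.update(aggregated, node_states)


class ReLMKGSketch(nn.Module):
    def __init__(self, vocab_size=1000, dim=64, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Stand-in for a pre-trained language model; per-layer outputs are collected
        # manually so that each layer can steer one reasoning step.
        self.encoder_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
             for _ in range(num_layers)]
        )
        self.gnn_steps = nn.ModuleList(
            [InstructionGuidedStep(dim) for _ in range(num_layers)]
        )
        self.score = nn.Linear(dim, 1)       # one answer logit per graph node

    def forward(self, question_ids, node_states, adj):
        h = self.embed(question_ids).unsqueeze(0)           # (1, seq_len, dim)
        for layer, gnn in zip(self.encoder_layers, self.gnn_steps):
            h = layer(h)                                     # output of this encoder layer
            instruction = h.mean(dim=1).squeeze(0)           # pool it into one instruction
            node_states = gnn(node_states, adj, instruction)
        return self.score(node_states).squeeze(-1)           # scores for candidate entities


if __name__ == "__main__":
    torch.manual_seed(0)
    question = torch.randint(0, 1000, (8,))      # toy token ids for the question + paths
    nodes = torch.randn(5, 64)                   # 5 candidate entities in the subgraph
    adj = torch.eye(5) + torch.rand(5, 5).round()    # toy adjacency with self-loops
    print(ReLMKGSketch()(question, nodes, adj).shape)        # torch.Size([5])
```

In the paper's setting, the toy encoder would be replaced by the pre-trained language model that encodes the complex question together with its textual paths, and the random adjacency matrix by the question-specific subgraph retrieved from the knowledge graph.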
Pages: 12032 - 12046
Page count: 15