ReLMKG: reasoning with pre-trained language models and knowledge graphs for complex question answering

Cited by: 10
Authors
Cao, Xing [1 ,2 ]
Liu, Yun [1 ,2 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
[2] Beijing Municipal Commiss Educ, Key Lab Commun & Informat Syst, Beijing 100044, Peoples R China
Keywords
Complex question answering; Pre-trained language model; Knowledge graph; Joint reasoning; WEB;
DOI
10.1007/s10489-022-04123-w
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The goal of complex question answering over knowledge bases (KBQA) is to find an answer entity in a knowledge graph. Recent information retrieval-based methods have focused on the topology of the knowledge graph while ignoring inconsistencies between knowledge graph embeddings and natural language embeddings, and they cannot effectively utilize both implicit and explicit knowledge for reasoning. In this paper, we propose a novel model, ReLMKG, to address this challenge. This approach performs joint reasoning over a pre-trained language model and the associated knowledge graph. The complex question and textual paths are encoded by the language model, bridging the gap between the question and the knowledge graph and exploiting implicit knowledge without introducing additional unstructured text. The outputs of different layers in the language model are used as instructions that guide a graph neural network to perform message propagation and aggregation step by step, exploiting the explicit knowledge contained in the structured knowledge graph. We analyse the reasoning ability of ReLMKG on knowledge graphs with different degrees of sparsity and evaluate the generalizability of the model. Experiments conducted on the ComplexWebQuestions and WebQuestionsSP datasets demonstrate the effectiveness of our approach on KBQA tasks.
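The instruction-guided message passing described in the abstract can be illustrated with a minimal sketch. This is an assumption about the general mechanism, not the authors' implementation: each reasoning step takes one "instruction" vector (standing in for the output of one language-model layer), scores every edge by how relevant its relation is to that instruction, and propagates node activations along the weighted edges.

```python
import math

# Hypothetical toy sketch of instruction-guided message passing over a
# knowledge graph. Vectors, relation names, and the two-step schedule are
# illustrative assumptions, not ReLMKG's actual parameters.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def propagate(node_scores, edges, rel_emb, instruction):
    """One message-passing step guided by an instruction vector.

    edges: list of (src, rel, dst) triples; rel_emb maps a relation
    name to its embedding vector.
    """
    # Relevance of each edge's relation to the current instruction.
    weights = softmax([dot(instruction, rel_emb[r]) for _, r, _ in edges])
    new_scores = {n: 0.0 for n in node_scores}
    for (src, _, dst), w in zip(edges, weights):
        new_scores[dst] += w * node_scores[src]
    return new_scores

# Tiny 2-hop example: topic entity "q" -> intermediate "m" -> answer "a".
edges = [("q", "directed_by", "m"), ("m", "born_in", "a"), ("q", "genre", "g")]
rel_emb = {"directed_by": [1.0, 0.0], "born_in": [0.0, 1.0], "genre": [0.2, 0.2]}
scores = {"q": 1.0, "m": 0.0, "a": 0.0, "g": 0.0}
# One instruction per hop, as if read from successive LM layers.
for instruction in ([1.0, 0.0], [0.0, 1.0]):
    scores = propagate(scores, edges, rel_emb, instruction)
answer = max(scores, key=scores.get)  # "a"
```

Because the first instruction matches `directed_by` and the second matches `born_in`, activation flows q → m → a across the two steps, so the answer entity accumulates the highest score.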
Pages: 12032-12046 (15 pages)