ReLMKG: reasoning with pre-trained language models and knowledge graphs for complex question answering

Cited by: 10
Authors
Cao, Xing [1,2]
Liu, Yun [1,2]
Affiliations
[1] Beijing Jiaotong Univ, Sch Elect & Informat Engn, Beijing 100044, Peoples R China
[2] Beijing Municipal Commiss Educ, Key Lab Commun & Informat Syst, Beijing 100044, Peoples R China
Keywords
Complex question answering; Pre-trained language model; Knowledge graph; Joint reasoning; WEB
DOI
10.1007/s10489-022-04123-w
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The goal of complex question answering over knowledge bases (KBQA) is to find an answer entity in a knowledge graph. Recent information retrieval-based methods focus on the topology of the knowledge graph, ignore the inconsistency between knowledge graph embeddings and natural language embeddings, and cannot effectively utilize both implicit and explicit knowledge for reasoning. In this paper, we propose a novel model, ReLMKG, to address these challenges. ReLMKG performs joint reasoning over a pre-trained language model and the associated knowledge graph. The complex question and the textual paths are encoded by the language model, which bridges the gap between the question and the knowledge graph and exploits implicit knowledge without introducing additional unstructured text. The outputs of different layers of the language model serve as instructions that guide a graph neural network through stepwise message propagation and aggregation, exploiting the explicit knowledge contained in the structured knowledge graph. We analyse the reasoning ability of ReLMKG on knowledge graphs of varying sparsity and evaluate the model's generalizability. Experiments on the ComplexWebQuestions and WebQuestionsSP datasets demonstrate the effectiveness of our approach on KBQA tasks.
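To make the abstract's architecture concrete, the following is a minimal sketch (not the authors' released implementation) of the core idea: hidden states from successive language-model layers act as per-step "instructions" that condition message passing in a graph neural network over the retrieved subgraph. It assumes a Hugging Face-style encoder exposing `output_hidden_states`; all module names and design choices here (CLS-state instructions, GRU node updates, sum aggregation) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InstructedGNNLayer(nn.Module):
    """One round of message passing conditioned on an instruction vector."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from (neighbor state, instruction)
        self.update = nn.GRUCell(dim, dim)   # node-state update from aggregated messages

    def forward(self, h, edge_index, instruction):
        # h: (num_nodes, dim); edge_index: (2, num_edges) of source/target ids;
        # instruction: (1, dim) step-specific guidance from the language model.
        src, dst = edge_index
        inst = instruction.expand(src.size(0), -1)               # one copy per edge
        m = F.relu(self.msg(torch.cat([h[src], inst], dim=-1)))  # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)          # sum messages at targets
        return self.update(agg, h)


class ReLMKGSketch(nn.Module):
    """Toy end-to-end wiring: LM layer states instruct T rounds of GNN reasoning."""

    def __init__(self, lm, dim, num_steps):
        super().__init__()
        self.lm = lm  # assumed: Hugging Face-style encoder, e.g. BertModel
        self.proj = nn.ModuleList(
            nn.Linear(lm.config.hidden_size, dim) for _ in range(num_steps))
        self.gnn = nn.ModuleList(
            InstructedGNNLayer(dim) for _ in range(num_steps))
        self.score = nn.Linear(dim, 1)

    def forward(self, question_inputs, node_feats, edge_index):
        # Encode the question (and, in the paper, its textual paths), keeping
        # the hidden states of every LM layer.
        out = self.lm(**question_inputs, output_hidden_states=True)
        hiddens = out.hidden_states  # (num_layers + 1) tensors of (1, seq, lm_dim)
        h = node_feats
        for t in range(len(self.gnn)):
            # One simple mapping: the [CLS] state of a progressively deeper LM
            # layer is the step-t instruction (requires num_steps <= num_layers;
            # the paper's layer-to-step mapping may differ).
            inst = self.proj[t](hiddens[t + 1][:, 0])
            h = self.gnn[t](h, edge_index, inst)
        return self.score(h).squeeze(-1)  # one answer score per graph node
```

Under these assumptions, with `lm = BertModel.from_pretrained("bert-base-uncased")`, `node_feats` of shape (num_nodes, dim), and `edge_index` built from the retrieved subgraph, the highest-scoring node would be taken as the predicted answer entity.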
Pages: 12032-12046
Page count: 15