An empirical study of pre-trained language models in simple knowledge graph question answering

Authors
Nan Hu
Yike Wu
Guilin Qi
Dehai Min
Jiaoyan Chen
Jeff Z Pan
Zafar Ali
Institutions
[1] Southeast University, School of Computer Science and Engineering
[2] The University of Manchester, Department of Computer Science
[3] The University of Edinburgh, School of Informatics
Source
World Wide Web | 2023 / Vol. 26
Keywords
Knowledge graph question answering; Pre-trained language models; Accuracy and efficiency; Scalability
Abstract
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream tasks. In recent work on knowledge graph question answering (KGQA), BERT or its variants have become an essential component of KGQA models. However, there is still a lack of comprehensive research comparing the performance of different PLMs in KGQA. To this end, we summarize two basic PLM-based KGQA frameworks, without additional neural network modules, and compare the performance of nine PLMs in terms of accuracy and efficiency. In addition, we present three benchmarks over larger-scale KGs, built on the popular SimpleQuestions benchmark, to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based basic KGQA frameworks on these benchmarks and on two other popular datasets, WebQuestionsSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT (https://chat.openai.com/), which has drawn a great deal of attention in the NLP community, demonstrating both its impressive capabilities and its limitations in zero-shot KGQA. We have released the code and benchmarks to promote the use of PLMs for KGQA (https://github.com/aannonymouuss/PLMs-in-Practical-KBQA).
Pages: 2855-2886
Number of pages: 31