An empirical study of pre-trained language models in simple knowledge graph question answering

Cited by: 0
Authors
Nan Hu
Yike Wu
Guilin Qi
Dehai Min
Jiaoyan Chen
Jeff Z Pan
Zafar Ali
Affiliations
[1] Southeast University, School of Computer Science and Engineering
[2] The University of Manchester, Department of Computer Science
[3] The University of Edinburgh, School of Informatics
Source
World Wide Web | 2023, Vol. 26
Keywords
Knowledge graph question answering; Pre-trained language models; Accuracy and efficiency; Scalability
DOI
Not available
Abstract
Large-scale pre-trained language models (PLMs) such as BERT have recently achieved great success and become a milestone in natural language processing (NLP). It is now the consensus of the NLP community to adopt PLMs as the backbone for downstream tasks. In recent work on knowledge graph question answering (KGQA), BERT or its variants have become an essential component of KGQA models. However, there is still a lack of comprehensive research comparing the performance of different PLMs in KGQA. To this end, we summarize two basic PLM-based KGQA frameworks, without additional neural network modules, to compare the performance of nine PLMs in terms of accuracy and efficiency. In addition, we present three benchmarks for larger-scale KGs, based on the popular SimpleQuestions benchmark, to investigate the scalability of PLMs. We carefully analyze the results of all PLM-based KGQA frameworks on these benchmarks and on two other popular datasets, WebQuestionSP and FreebaseQA, and find that knowledge distillation techniques and knowledge enhancement methods in PLMs are promising for KGQA. Furthermore, we test ChatGPT (https://chat.openai.com/), which has drawn a great deal of attention in the NLP community, demonstrating its impressive capabilities and limitations in zero-shot KGQA. We have released the code and benchmarks to promote the use of PLMs in KGQA (https://github.com/aannonymouuss/PLMs-in-Practical-KBQA).
Pages: 2855–2886
Page count: 31