Towards Knowledge Enhanced Language Model for Machine Reading Comprehension

Cited by: 10
Authors
Gong, Peizhu [1 ]
Liu, Jin [1 ]
Yang, Yihe [1 ]
He, Huihua [2 ]
Affiliations
[1] Shanghai Maritime Univ, Coll Informat Engn, Shanghai 201306, Peoples R China
[2] Shanghai Normal Univ, Coll Educ, Shanghai 200234, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Task analysis; Semantics; Knowledge engineering; Knowledge based systems; Bit error rate; Encoding; Syntactics; Machine reading comprehension; knowledge graph embedding; BERT; capsule network;
DOI
10.1109/ACCESS.2020.3044308
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Machine reading comprehension (MRC) is a crucial and challenging task in natural language processing (NLP). Recently, knowledge graph (KG) embedding has gained considerable attention because it can effectively provide side information for downstream tasks. However, most previous knowledge-based models ignore the structural characteristics of the triples in KGs and simply convert them into vector representations for direct accumulation, leading to deficiencies in knowledge extraction and knowledge fusion. To alleviate this problem, we propose KCF-NET, a novel deep model that incorporates knowledge graph representations with context as the basis for answer prediction, leveraging a capsule network to encode the intrinsic spatial relationships within KG triples. In KCF-NET, we fine-tune BERT, a high-performance contextual language representation model, to capture complex linguistic phenomena. In addition, a novel fusion structure based on a multi-head attention mechanism is designed to balance the weights of knowledge and context. To evaluate the knowledge expression and reading comprehension abilities of our model, we conducted extensive experiments on multiple public datasets, including WN11, FB13, SemEval-2010 Task 8, and SQuAD. Experimental results show that KCF-NET achieves state-of-the-art results on both link prediction and MRC tasks with a negligible parameter increase over BERT-Base, and obtains competitive results on the triple classification task with a significantly reduced model size.
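The abstract's fusion structure, which uses multi-head attention to balance the weight of knowledge and context, is the most concrete technical claim in this record. The paper publishes no code, so the sketch below is a minimal, hypothetical PyTorch reading of that idea: the class name KnowledgeContextFusion, the sigmoid gate, and all tensor shapes are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class KnowledgeContextFusion(nn.Module):
        # Hypothetical fusion layer: context tokens attend over KG triple
        # embeddings via multi-head attention, then a learned gate balances
        # the contextual and knowledge-derived representations.
        def __init__(self, hidden_size=768, num_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
            self.gate = nn.Linear(2 * hidden_size, hidden_size)
            self.norm = nn.LayerNorm(hidden_size)

        def forward(self, context, knowledge):
            # context:   (batch, seq_len, hidden), e.g. BERT-Base outputs
            # knowledge: (batch, n_triples, hidden), e.g. capsule-encoded triples
            attended, _ = self.attn(query=context, key=knowledge, value=knowledge)
            # Per-token gate weighs context against attended knowledge.
            g = torch.sigmoid(self.gate(torch.cat([context, attended], dim=-1)))
            return self.norm(g * context + (1 - g) * attended)

    # Usage with random tensors standing in for real encoder outputs:
    fusion = KnowledgeContextFusion()
    ctx = torch.randn(2, 128, 768)  # contextual token embeddings
    kg = torch.randn(2, 16, 768)    # projected triple embeddings
    out = fusion(ctx, kg)           # (2, 128, 768)

A per-token sigmoid gate is one common way to realize "balance the weight of knowledge and context"; the paper's actual fusion layer, and its capsule-network triple encoder, may differ in detail.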
Pages: 224837-224851
Page count: 15
Related Papers
50 records in total
  • [1] Towards Medical Machine Reading Comprehension with Structural Knowledge and Plain Text
    Li, Dongfang
    Hu, Baotian
    Chen, Qingcai
    Peng, Weihua
    Wang, Anqi
    [J]. PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020: 1427-1438
  • [2] Machine Reading Comprehension with Rich Knowledge
    He, Jun
    Peng, Li
    Zhang, Yinghui
    Sun, Bo
    Xiao, Rong
    Xiao, Yongkang
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2022, 36 (05)
  • [3] Language Processing and the Reading of Literature - Towards a Model of Comprehension - DILLON, GL
    TAYLOR, T
    [J]. REVIEW OF ENGLISH STUDIES, 1980, 31 (123): 322-324
  • [4] Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension
    Yang, An
    Wang, Quan
    Liu, Jing
    Liu, Kai
    Lyu, Yajuan
    Wu, Hua
    She, Qiaoqiao
    Li, Sujian
    [J]. 57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019), 2019: 2346-2357
  • [5] Methods and Trends of Machine Reading Comprehension in the Arabic Language
    Alkhatnai, Mubarak
    Amjad, Hamza Imam
    Amjad, Maaz
    Gelbukh, Alexander
    [J]. COMPUTACION Y SISTEMAS, 2020, 24 (04): 1607-1615
  • [6] Toward Natural Language Understanding by Machine Reading Comprehension
    Nishida, Kyosuke
    Saito, Itsumi
    Otsuka, Atsushi
    Nishida, Kosuke
    Nomoto, Narichika
    Asano, Hisako
    [J]. NTT Technical Review, 2019, 17 (09): 9-14
  • [7] Review of Conversational Machine Reading Comprehension for Knowledge Graph
    Hu, Juan
    Xi, Xuefeng
    Cui, Zhiming
    [J]. Computer Engineering and Applications, 2024, 60 (03): 17-28
  • [8] Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge
    Sun, Kai
    Yu, Dian
    Chen, Jianshu
    Yu, Dong
    Cardie, Claire
    [J]. PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1 (LONG PAPERS), 2022: 8736-8747
  • [9] Interpretable Modular Knowledge Reasoning for Machine Reading Comprehension
    Ren, Mucheng
    Huang, Heyan
    Gao, Yang
    [J]. NEURAL COMPUTING & APPLICATIONS, 2022, 34 (12): 9901-9918