Towards Robust Neural Machine Reading Comprehension via Question Paraphrases

Citations: 0
Authors
Li, Ying [1 ]
Li, Hongyu [2 ]
Liu, Jing [2 ]
Affiliations
[1] Univ Sci & Technol China, Natl Engn Lab Brain Inspired Intelligence Technol, Hefei, Peoples R China
[2] Baidu Inc, Beijing, Peoples R China
Keywords
machine reading comprehension; oversensitivity; question paraphrases
DOI
10.1109/ialp48816.2019.9037673
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we focus on addressing the oversensitivity issue of neural machine reading comprehension (MRC) models. By oversensitivity, we mean that neural MRC models give different answers to question paraphrases that are semantically equivalent. To address this issue, we first create a large-scale Chinese MRC dataset with high-quality question paraphrases generated by a toolkit used in Baidu Search. Then, we quantitatively analyze the oversensitivity issue of neural MRC models on the dataset. Intuitively, if two questions are paraphrases of each other, a robust model should give the same predictions. Based on this intuition, we propose a regularized BERT-based model that encourages the model to give the same predictions to similar inputs by leveraging high-quality question paraphrases. The experimental results show that our approach significantly improves the robustness of a strong BERT-based MRC model and achieves improvements over the BERT-based baseline in terms of held-out accuracy. Specifically, the different prediction ratio (DPR) for question paraphrases of the proposed model decreases by more than 10%.
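The abstract names two concrete ingredients: a paraphrase-consistency regularizer added to a BERT-based model, and the different prediction ratio (DPR) used to quantify oversensitivity. The paper's exact formulation is not given in the abstract, so the following PyTorch sketch is only a plausible reading of it: a symmetric-KL penalty between the answer-span distributions predicted for a question and for one of its paraphrases, plus a straightforward DPR computation. The names `consistency_loss`, `symmetric_kl`, and the weight `lam` are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch, assuming a span-extraction BERT model that outputs
# start/end logits; an illustrative reading of the abstract, not the
# authors' published implementation.
import torch
import torch.nn.functional as F


def symmetric_kl(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between two predicted distributions."""
    log_pa = F.log_softmax(logits_a, dim=-1)
    log_pb = F.log_softmax(logits_b, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    kl_ab = F.kl_div(log_pa, log_pb.exp(), reduction="batchmean")
    kl_ba = F.kl_div(log_pb, log_pa.exp(), reduction="batchmean")
    return 0.5 * (kl_ab + kl_ba)


def consistency_loss(start_q, end_q, start_p, end_p) -> torch.Tensor:
    """Penalize divergence between the answer-span distributions the model
    predicts for a question (q) and for one of its paraphrases (p)."""
    return symmetric_kl(start_q, start_p) + symmetric_kl(end_q, end_p)


def different_prediction_ratio(answers_q, answers_p) -> float:
    """DPR: fraction of paraphrase pairs on which the predicted answers
    differ (the abstract reports this dropping by more than 10%)."""
    assert len(answers_q) == len(answers_p)
    diff = sum(a != b for a, b in zip(answers_q, answers_p))
    return diff / len(answers_q)


# Hypothetical training objective: the usual span loss on the original
# question plus the weighted consistency term (`lam` is an assumed
# hyperparameter):
# loss = span_loss(start_q, end_q, gold_spans) \
#        + lam * consistency_loss(start_q, end_q, start_p, end_p)
```

Symmetric KL over the start/end distributions is one common choice for this kind of consistency training; the paper may use a different distance or apply the penalty at a different point in the model.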
Pages: 290-295
Number of pages: 6
Related Papers
50 records in total
  • [1] Exploring Machine Reading Comprehension for Continuous Questions via Subsequent Question Completion
    Yang, Kaijing
    Zhang, Xin
    Chen, Dongmei
    IEEE ACCESS, 2021, 9 : 12622 - 12634
  • [2] Robust Domain Adaptation for Machine Reading Comprehension
    Jiang, Liang
    Huang, Zhenyu
    Liu, Jia
    Wen, Zujie
    Peng, Xi
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 8060 - 8069
  • [3] Pre-reading Activity over Question for Machine Reading Comprehension
    Yuan, Chenchen
    Liu, Kaiyang
    Zhang, Xulu
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 1411 - 1418
  • [4] Learning to Ask: Neural Question Generation for Reading Comprehension
    Du, Xinya
    Shao, Junru
    Cardie, Claire
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017, : 1342 - 1352
  • [5] Neural Machine Reading Comprehension: Methods and Trends
    Liu, Shanshan
    Zhang, Xin
    Zhang, Sheng
    Wang, Hui
    Zhang, Weiming
    APPLIED SCIENCES-BASEL, 2019, 9 (18):
  • [6] A Robust Adversarial Training Approach to Machine Reading Comprehension
    Liu, Kai
    Liu, Xin
    Yang, An
    Liu, Jing
    Su, Jinsong
    Li, Sujian
    She, Qiaoqiao
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 8392 - 8400
  • [7] Named Entity Filters for Robust Machine Reading Comprehension
    Peng, Yu-Yan
    Hsu, Jane Yung-jen
    2018 CONFERENCE ON TECHNOLOGIES AND APPLICATIONS OF ARTIFICIAL INTELLIGENCE (TAAI), 2018, : 181 - 184
  • [8] JaQuAD: Japanese question answering dataset for machine reading comprehension
    So, ByungHoon
    Byun, Kyuhong
    Kang, Kyungwon
    Cho, Seongjin
arXiv, 2022
  • [9] Survey of Machine Reading Comprehension Based on Neural Network
    Gu Y.-J.
    Gui X.-L.
    Li D.-F.
    Shen Y.
    Liao D.
Ruan Jian Xue Bao/Journal of Software, 2020, 31 (07): 2095 - 2126
  • [10] HCT: Chinese Medical Machine Reading Comprehension Question-Answering via Hierarchically Collaborative Transformer
    Wang, Meiling
    He, Xiaohai
    Liu, Luping
    Fang, Qingmao
    Zhang, Mei
    Chen, Honggang
    Liu, Yan
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (05) : 3055 - 3066