Hybrid embedding and joint training of stacked encoder for opinion question machine reading comprehension

Cited by: 4
|
Authors
Huang, Xiang-zhou [1 ]
Tang, Si-liang [1 ]
Zhang, Yin [1 ]
Wei, Bao-gang [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Machine reading comprehension; Neural networks; Joint training; Data augmentation;
DOI
10.1631/FITEE.1900571
CLC number
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Opinion question machine reading comprehension (MRC) requires a machine to answer questions by analyzing the corresponding passages. Compared with traditional MRC tasks, in which the answer to every question is a segment of text in the corresponding passage, opinion question MRC is more challenging because the answer to an opinion question may not appear in the passage at all and must instead be deduced from multiple sentences. In this study, a novel framework based on neural networks is proposed to address such problems, in which a new hybrid embedding training method combining text features is used. Furthermore, extra attention and output layers that generate auxiliary losses are introduced to jointly train the stacked recurrent neural networks. To deal with the imbalance of the dataset, the irrelevancy of question and passage is used for data augmentation. Experimental results show that the proposed method achieves state-of-the-art performance. We were the biweekly champion of the opinion question MRC task in Artificial Intelligence Challenger 2018 (AIC2018).
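The joint-training idea summarized in the abstract (auxiliary output layers attached to the stacked encoders, whose losses are added to the final objective so lower layers receive direct supervision) can be sketched as follows. This is an illustrative assumption about the loss combination only, not the paper's actual implementation; the function name, weight value, and toy numbers are all hypothetical.

```python
# Illustrative sketch (not the paper's code): with stacked encoders, an
# auxiliary output head is attached after each intermediate layer, and the
# auxiliary losses are folded into the final objective. The weighting
# scheme below is an assumption for illustration.

def joint_loss(final_loss, aux_losses, aux_weight=0.3):
    """Combine the top layer's loss with weighted auxiliary-head losses."""
    return final_loss + aux_weight * sum(aux_losses)

# Toy example: one final head plus two auxiliary heads on lower layers.
total = joint_loss(0.8, [1.2, 1.0])  # 0.8 + 0.3 * (1.2 + 1.0)
```

At inference time only the final head would be used; the auxiliary heads exist solely to shape the gradients during training.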
Pages: 1346-1355
Page count: 10
Related papers
45 records in total
  • [21] Exploring Machine Reading Comprehension for Continuous Questions via Subsequent Question Completion
    Yang, Kaijing
    Zhang, Xin
    Chen, Dongmei
    IEEE ACCESS, 2021, 9 : 12622 - 12634
  • [22] DAQAS: Deep Arabic Question Answering System based on duplicate question detection and machine reading comprehension
    Alami, Hamza
    Mahdaouy, Abdelkader El
    Benlahbib, Abdessamad
    En-Nahnahi, Noureddine
    Berrada, Ismail
    Ouatik, Said El Alaoui
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2023, 35 (08)
  • [23] A Densely Connected Encoder Stack Approach for Multi-type Legal Machine Reading Comprehension
    Nai, Peiran
    Li, Lin
    Tao, Xiaohui
    WEB INFORMATION SYSTEMS ENGINEERING, WISE 2020, PT II, 2020, 12343 : 167 - 181
  • [24] Question answering model based on machine reading comprehension with knowledge enhancement and answer verification
    Yang, Ziming
    Sun, Yuxia
    Kuang, Qingxuan
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (12):
  • [25] Improving Machine Reading Comprehension through A Simple Masked-Training Scheme
    Yao, Xun
    Ma, Junlong
    Hu, Xinrong
    Yang, Jie
    Li, Yuan-Fang
    13TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING AND THE 3RD CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, IJCNLP-AACL 2023, 2023, : 222 - 232
  • [26] A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction
    Niu, Yilin
    Jiao, Fangkai
    Zhou, Mantong
    Yao, Ting
    Xu, Jingfang
    Huang, Minlie
    58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), 2020, : 3916 - 3927
  • [27] Combining permuted language model and adversarial training for Chinese machine reading comprehension
    Liu J.
    Chu X.
    Wang J.
    Wang M.
    Wang Y.
JOURNAL OF INTELLIGENT AND FUZZY SYSTEMS, 2024, 46 (04): : 10059 - 10073
  • [28] Machine Reading Comprehension Framework Based on Self-Training for Domain Adaptation
    Lee, Hyeon-Gu
    Jang, Youngjin
    Kim, Harksoo
    IEEE ACCESS, 2021, 9 : 21279 - 21285
  • [30] NER-MQMRC: Formulating Named Entity Recognition as Multi Question Machine Reading Comprehension
    Shrimal, Anubhav
    Jain, Avi
    Mehta, Kartik
    Yenigalla, Promod
    2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2022, 2022, : 230 - 238