Hybrid embedding and joint training of stacked encoder for opinion question machine reading comprehension

Cited by: 4
Authors
Huang, Xiang-zhou [1]
Tang, Si-liang [1]
Zhang, Yin [1]
Wei, Bao-gang [1]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Machine reading comprehension; Neural networks; Joint training; Data augmentation; TP391.1
DOI
10.1631/FITEE.1900571
Chinese Library Classification
TP [automation and computer technology]
Subject classification code
0812
Abstract
Opinion question machine reading comprehension (MRC) requires a machine to answer questions by analyzing corresponding passages. Compared with traditional MRC tasks, where the answer to every question is a segment of text in the corresponding passage, opinion question MRC is more challenging because the answer to an opinion question may not appear in the corresponding passage at all but must be deduced from multiple sentences. In this study, a novel framework based on neural networks is proposed to address such problems, in which a new hybrid embedding training method combining text features is used. Furthermore, extra attention and output layers that generate auxiliary losses are introduced to jointly train the stacked recurrent neural networks. To deal with the imbalance of the dataset, the irrelevancy between questions and passages is exploited for data augmentation. Experimental results show that the proposed method achieves state-of-the-art performance; it won the biweekly championship in the opinion question MRC task of the Artificial Intelligence Challenger 2018 (AIC2018) competition.
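The abstract describes the joint-training idea concretely enough to sketch: auxiliary output heads are attached to intermediate layers of a stacked recurrent encoder, and their losses are summed with the main loss. The PyTorch sketch below is a minimal illustration of that idea only, under assumed names and hyperparameters (GRU layers, mean pooling in place of the attention layers the abstract mentions, an aux_weight of 0.3, and a fourth "irrelevant" class standing in for the irrelevancy-based augmentation); it is not the authors' published architecture.

```python
import torch
import torch.nn as nn


class StackedEncoderWithAuxHeads(nn.Module):
    # Stacked bidirectional GRU encoder; every layer feeds its own
    # lightweight classification head, so intermediate layers receive
    # direct supervision (auxiliary losses) during joint training.
    def __init__(self, input_dim, hidden_dim, num_layers=3, num_classes=4):
        super().__init__()
        self.layers = nn.ModuleList()
        self.heads = nn.ModuleList()
        for i in range(num_layers):
            in_dim = input_dim if i == 0 else 2 * hidden_dim
            self.layers.append(
                nn.GRU(in_dim, hidden_dim, batch_first=True, bidirectional=True)
            )
            self.heads.append(nn.Linear(2 * hidden_dim, num_classes))

    def forward(self, x):                      # x: (batch, seq_len, input_dim)
        logits_per_layer = []
        h = x
        for rnn, head in zip(self.layers, self.heads):
            h, _ = rnn(h)                      # (batch, seq_len, 2*hidden_dim)
            # Mean-pool over time as a simple sequence summary (illustrative;
            # the paper uses attention layers here).
            logits_per_layer.append(head(h.mean(dim=1)))
        return logits_per_layer                # last entry = main prediction


def joint_loss(logits_per_layer, labels, aux_weight=0.3):
    # Main loss from the top layer plus down-weighted auxiliary losses from
    # every lower layer; aux_weight is an assumed hyperparameter.
    ce = nn.CrossEntropyLoss()
    main = ce(logits_per_layer[-1], labels)
    aux = sum(ce(logits, labels) for logits in logits_per_layer[:-1])
    return main + aux_weight * aux


# Toy usage: class index 3 plays the role of "irrelevant", so question-passage
# pairs drawn from different examples can serve as augmented training data.
model = StackedEncoderWithAuxHeads(input_dim=300, hidden_dim=128)
x = torch.randn(8, 50, 300)                    # 8 fake embedded sequences
labels = torch.randint(0, 4, (8,))
loss = joint_loss(model(x), labels)
loss.backward()
```

In this toy setup the lower-layer heads act as deep supervision, which typically stabilizes the training of deep stacked encoders; only the top layer's prediction is used at inference time.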
Pages: 1346-1355
Number of pages: 10
Related papers
45 records in total
  • [1] Hybrid embedding and joint training of stacked encoder for opinion question machine reading comprehension
    Xiang-zhou Huang
    Si-liang Tang
    Yin Zhang
    Bao-gang Wei
    Frontiers of Information Technology & Electronic Engineering, 2020, 21 : 1346 - 1355
  • [2] Multi-task joint training model for machine reading comprehension
    Li, Fangfang
    Shan, Youran
    Mao, Xingliang
    Ren, Xingkai
    Liu, Xiyao
    Zhang, Shichao
    NEUROCOMPUTING, 2022, 488 : 66 - 77
  • [3] Pre-reading Activity over Question for Machine Reading Comprehension
    Yuan, Chenchen
    Liu, Kaiyang
    Zhang, Xulu
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 1411 - 1418
  • [4] Embedding Reading Comprehension Training in Content-Area Instruction
    Williams, Joanna P.
    Stafford, K. Brooke
    Lauer, Kristen D.
    Hall, Kendra M.
    Pollini, Simonne
    JOURNAL OF EDUCATIONAL PSYCHOLOGY, 2009, 101 (01) : 1 - 20
  • [5] Capsule Networks for Chinese Opinion Questions Machine Reading Comprehension
    Ding, Longxiang
    Li, Zhoujun
    Wang, Boyang
    He, Yueying
    CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019, 2019, 11856 : 521 - 532
  • [6] JaQuAD: Japanese question answering dataset for machine reading comprehension
    So, ByungHoon
    Byun, Kyuhong
    Kang, Kyungwon
    Cho, Seongjin
    arXiv, 2022
  • [7] Effects of question-generation training on reading comprehension
    Davey, B.
    McBride, S.
    JOURNAL OF EDUCATIONAL PSYCHOLOGY, 1986, 78 (04) : 256 - 262
  • [8] A Robust Adversarial Training Approach to Machine Reading Comprehension
    Liu, Kai
    Liu, Xin
    Yang, An
    Liu, Jing
    Su, Jinsong
    Li, Sujian
    She, Qiaoqiao
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI 2020), 2020, 34 : 8392 - 8400
  • [9] Adversarial Training for Machine Reading Comprehension with Virtual Embeddings
    Yang, Ziqing
    Cui, Yiming
    Si, Chenglei
    Che, Wanxiang
    Liu, Ting
    Wang, Shijin
    Hu, Guoping
    10TH CONFERENCE ON LEXICAL AND COMPUTATIONAL SEMANTICS (*SEM 2021), 2021, : 308 - 313
  • [10] Cooperative Self-training of Machine Reading Comprehension
    Luo, Hongyin
    Li, Shang-Wen
    Gao, Mingye
    Yu, Seunghak
    Glass, James
    NAACL 2022: THE 2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES, 2022, : 244 - 257