MMM: Multi-Stage Multi-Task Learning for Multi-Choice Reading Comprehension

Cited by: 0
Authors
Jin, Di [1 ]
Gao, Shuyang [2 ]
Kao, Jiun-Yu [2 ]
Chung, Tagyoung [2 ]
Hakkani-tur, Dilek [2 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA
[2] Amazon Alexa AI, Sunnyvale, CA USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Machine Reading Comprehension (MRC) for question answering (QA), which aims to answer a question given the relevant context passages, is an important way to test the ability of intelligent systems to understand human language. Multiple-Choice QA (MCQA) is one of the most difficult tasks in MRC because it often requires more advanced reading comprehension skills, such as logical reasoning, summarization, and arithmetic operations, compared to the extractive counterpart, where answers are usually spans of text within the given passages. Moreover, most existing MCQA datasets are small in size, making the task even harder. We introduce MMM, a Multi-stage Multi-task learning framework for Multi-choice reading comprehension. Our method involves two sequential stages: a coarse-tuning stage using out-of-domain datasets and a multi-task learning stage using a larger in-domain dataset, which helps the model generalize better with limited data. Furthermore, we propose a novel multi-step attention network (MAN) as the top-level classifier for this task. We demonstrate that MMM significantly advances the state-of-the-art on four representative MCQA datasets.
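The multi-step attention idea in the abstract can be illustrated with a minimal numpy sketch: a query vector repeatedly attends over the encoder's token representations and is refined between steps. This is a simplified toy (hypothetical shapes, a plain convex-combination update instead of the paper's learned recurrent update, no trained parameters), not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_step_attention(H, q, K=2):
    """Toy sketch of a multi-step attention classifier head.

    H : (T, d) token representations of a passage+option sequence
    q : (d,)   initial query vector (e.g. the [CLS] state)
    K : number of reasoning steps

    At each step, attend over H with the current query, then mix the
    attended summary back into the query. The 0.5/0.5 mix is a
    simplified stand-in for the paper's learned recurrent update.
    """
    for _ in range(K):
        scores = H @ q                 # (T,) attention logits
        alpha = softmax(scores)        # (T,) attention weights
        context = alpha @ H            # (d,) attended summary
        q = 0.5 * q + 0.5 * context    # simplified query refinement
    return q

# Toy usage: 4 tokens, hidden size 3.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
q0 = np.array([1.0, 0.0, 0.0])
q_final = multi_step_attention(H, q0, K=2)
```

In the full model, one such refined query would be scored per answer option, and the option scores passed through a softmax to pick the answer.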
Pages: 8010-8017
Number of pages: 8
Related papers
(50 total)
  • [1] Multi-Stage Multi-Task Feature Learning
    Gong, Pinghua
    Ye, Jieping
    Zhang, Changshui
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2013, 14 : 2979 - 3010
  • [2] Multi-Stage Multi-Task Learning with Reduced Rank
    Han, Lei
    Zhang, Yu
    [J]. THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2016, : 1638 - 1644
  • [3] A New Multi-choice Reading Comprehension Dataset for Curriculum Learning
    Liang, Yichan
    Li, Jianheng
    Yin, Jian
    [J]. ASIAN CONFERENCE ON MACHINE LEARNING, VOL 101, 2019, 101 : 742 - 757
  • [4] Multi-stage Multi-task feature learning via adaptive threshold
    Fan, Ya-Ru
    Wang, Yilun
    Huang, Ting-Zhu
    [J]. 2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 1665 - 1670
  • [5] Multi-Grained Evidence Inference for Multi-Choice Reading Comprehension
    Zhao, Yilin
    Zhao, Hai
    Duan, Sufeng
    [J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2023, 31 : 3896 - 3907
  • [6] Multi-task transfer learning for biomedical machine reading comprehension
    Guo, Wenyang
    Du, Yongping
    Zhao, Yiliang
    Ren, Keyan
    [J]. INTERNATIONAL JOURNAL OF DATA MINING AND BIOINFORMATICS, 2020, 23 (03) : 234 - 250
  • [7] CoLISA: Inner Interaction via Contrastive Learning for Multi-choice Reading Comprehension
    Dong, Mengxing
    Zou, Bowei
    Li, Yanling
    Hong, Yu
    [J]. ADVANCES IN INFORMATION RETRIEVAL, ECIR 2023, PT I, 2023, 13980 : 264 - 278
  • [8] Incremental BERT with commonsense representations for multi-choice reading comprehension
    Li, Ronghan
    Wang, Lifang
    Jiang, Zejun
    Liu, Dong
    Zhao, Meng
    Lu, Xinyu
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (21-23) : 32311 - 32333
  • [9] Option Attentive Capsule Network for Multi-choice Reading Comprehension
    Miao, Hang
    Liu, Ruifang
    Gao, Sheng
    [J]. NEURAL INFORMATION PROCESSING (ICONIP 2019), PT III, 2019, 11955 : 306 - 318
  • [10] Leveraging greater relations for improving multi-choice reading comprehension
    Hong Yan
    Lijun Liu
    Xupeng Feng
    Qingsong Huang
    [J]. Neural Computing and Applications, 2022, 34 : 20851 - 20864