End-to-End generation of Multiple-Choice questions using Text-to-Text transfer Transformer models

Cited by: 36
|
Authors
Rodriguez-Torrealba, Ricardo [1 ]
Garcia-Lopez, Eva [1 ]
Garcia-Cabot, Antonio [1 ]
Affiliations
[1] Univ Alcala, Dept Ciencias Comp, Alcala De Henares 28801, Madrid, Spain
Keywords
Multiple-Choice Question Generation; Distractor Generation; Question Answering; Question Generation; Reading Comprehension;
DOI
10.1016/j.eswa.2022.118258
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing worldwide adoption of e-learning tools and the widespread growth of online education have brought multiple challenges, including the ability to generate assessments at the scale and speed demanded by this environment. In this sense, recent advances in language models and architectures like the Transformer provide opportunities to explore how to assist educators in these tasks. This study focuses on using neural language models for the generation of questionnaires composed of multiple-choice questions, based on English Wikipedia articles as input. The problem is addressed along three dimensions: Question Generation (QG), Question Answering (QA), and Distractor Generation (DG). A processing pipeline based on pre-trained T5 language models is designed, and a REST API is implemented for its use. The DG task is defined using a text-to-text format, and a T5 model is fine-tuned on the DG-RACE dataset, showing an improvement in the ROUGE-L metric compared to the reference for the dataset. A discussion about the lack of an adequate metric for DG is presented, and cosine similarity using word embeddings is considered as a complement. Questionnaires are evaluated by human experts, who report that questions and options are generally well formed; however, they are more oriented to measuring retention than comprehension.
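The cosine-similarity complement mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the toy 3-dimensional word vectors and the averaging of token vectors into a sentence embedding are assumptions for demonstration, not the embedding model used in the paper.

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (0.0 if either is zero)."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else 0.0

def sentence_embedding(tokens, word_vectors, dim=3):
    """Average the word vectors of a token list (toy stand-in for a
    pre-trained embedding model)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Toy word embeddings for demonstration only.
word_vectors = {
    "paris": np.array([0.9, 0.1, 0.0]),
    "london": np.array([0.8, 0.2, 0.1]),
    "banana": np.array([0.0, 0.1, 0.9]),
}

# Compare candidate distractors against the correct answer: a plausible
# distractor ("london") should score higher than an implausible one ("banana").
answer = sentence_embedding(["paris"], word_vectors)
for distractor in (["london"], ["banana"]):
    emb = sentence_embedding(distractor, word_vectors)
    print(distractor[0], round(cosine_similarity(answer, emb), 3))
```

Under this sketch, a higher similarity to the correct answer indicates a semantically closer (and thus potentially more plausible) distractor, which is the kind of signal n-gram metrics such as ROUGE-L do not capture.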
Pages: 12
Related Papers
50 records in total
  • [21] Recognizing Multiple Text Sequences from an Image by Pure End-to-End Learning
    Xu, Zhenlong
    Zhou, Shuigeng
    Bai, Fan
    Cheng, Zhanzhan
    Niu, Yi
    Pu, Shiliang
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 7058 - 7065
  • [22] Myanmar Text-to-Speech Synthesis Using End-to-End Model
    Qin, Qinglai
    Yang, Jian
    Li, Peiying
    2020 4TH INTERNATIONAL CONFERENCE ON NATURAL LANGUAGE PROCESSING AND INFORMATION RETRIEVAL, NLPIR 2020, 2020, : 6 - 11
  • [23] How Do Seq2Seq Models Perform on End-to-End Data-to-Text Generation?
    Yin, Xunjian
    Wan, Xiaojun
    PROCEEDINGS OF THE 60TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), VOL 1: (LONG PAPERS), 2022, : 7701 - 7710
  • [24] PRE-TRAINING TRANSFORMER DECODER FOR END-TO-END ASR MODEL WITH UNPAIRED TEXT DATA
    Gao, Changfeng
    Cheng, Gaofeng
    Yang, Runyan
    Zhu, Han
    Zhang, Pengyuan
    Yan, Yonghong
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6543 - 6547
  • [25] VX2TEXT: End-to-End Learning of Video-Based Text Generation From Multimodal Inputs
    Lin, Xudong
    Bertasius, Gedas
    Wang, Jue
    Chang, Shih-Fu
    Parikh, Devi
    Torresani, Lorenzo
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 7001 - 7011
  • [26] Ensemble-NQG-T5: Ensemble Neural Question Generation Model Based on Text-to-Text Transfer Transformer
    Hwang, Myeong-Ha
    Shin, Jikang
    Seo, Hojin
    Im, Jeong-Seon
    Cho, Hee
    Lee, Chun-Kwon
    APPLIED SCIENCES-BASEL, 2023, 13 (02):
  • [27] Neural data-to-text generation: A comparison between pipeline and end-to-end architectures
    Ferreira, Thiago Castro
    van der Lee, Chris
    van Miltenburg, Emiel
    Krahmer, Emiel
    2019 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING AND THE 9TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (EMNLP-IJCNLP 2019): PROCEEDINGS OF THE CONFERENCE, 2019, : 552 - 562
  • [28] End-to-End Historical Handwritten Ethiopic Text Recognition Using Deep Learning
    Malhotra, Ruchika
    Addis, Maru Tesfaye
    IEEE ACCESS, 2023, 11 : 99535 - 99545
  • [29] An end-to-end handwritten text recognition method using residual attention networks
    Wang Y.-T.
    Zheng H.
    Chang H.-Y.
    Li S.
    Kongzhi yu Juece/Control and Decision, 2023, 38 (07): : 1825 - 1834
  • [30] End-to-End Handwritten Paragraph Text Recognition Using a Vertical Attention Network
    Coquenet, Denis
    Chatelain, Clement
    Paquet, Thierry
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (01) : 508 - 524