End-to-End Generation of Multiple-Choice Questions Using Text-to-Text Transfer Transformer Models

Cited by: 36
Authors
Rodriguez-Torrealba, Ricardo [1 ]
Garcia-Lopez, Eva [1 ]
Garcia-Cabot, Antonio [1 ]
Affiliations
[1] Univ Alcala, Dept Ciencias Comp, Alcala De Henares 28801, Madrid, Spain
Keywords
Multiple-Choice Question Generation; Distractor Generation; Question Answering; Question Generation; Reading Comprehension;
DOI
10.1016/j.eswa.2022.118258
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The increasing worldwide adoption of e-learning tools and the widespread growth of online education have brought multiple challenges, including the need to generate assessments at the scale and speed this environment demands. In this context, recent advances in language models and architectures such as the Transformer provide opportunities to explore how educators can be assisted in these tasks. This study focuses on using neural language models to generate questionnaires composed of multiple-choice questions, taking English Wikipedia articles as input. The problem is addressed along three dimensions: Question Generation (QG), Question Answering (QA), and Distractor Generation (DG). A processing pipeline based on pre-trained T5 language models is designed, and a REST API is implemented for its use. The DG task is cast in a text-to-text format and a T5 model is fine-tuned on the DG-RACE dataset, showing an improvement in the ROUGE-L metric over the reference for the dataset. The lack of an adequate metric for DG is discussed, and cosine similarity between word embeddings is considered as a complement. Questionnaires were evaluated by human experts, who reported that questions and options are generally well formed, although they are more oriented toward measuring retention than comprehension.
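The cosine-similarity complement mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 4-dimensional vectors are toy stand-ins for real word embeddings (e.g. GloVe or fastText), and the variable names are assumptions made for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings; values are illustrative only.
answer     = np.array([0.9, 0.1, 0.0, 0.3])
distractor = np.array([0.8, 0.2, 0.1, 0.2])
unrelated  = np.array([0.0, 0.9, 0.8, 0.0])

# A plausible distractor should sit closer to the correct answer in
# embedding space than an unrelated option, without being identical to it.
print(cosine_similarity(answer, distractor))  # high: semantically close
print(cosine_similarity(answer, unrelated))   # low: semantically distant
```

The intuition is that ROUGE-L rewards surface overlap with a single reference distractor, while embedding similarity can credit distractors that are semantically plausible but lexically different.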
Pages: 12
Related papers
50 records in total
  • [1] Distractor Generation Through Text-to-Text Transformer Models
    de-Fitero-Dominguez, David
    Garcia-Lopez, Eva
    Garcia-Cabot, Antonio
    del-Hoyo-Gabaldon, Jesus-Angel
    Moreno-Cediel, Antonio
    IEEE ACCESS, 2024, 12 : 25580 - 25589
  • [2] Bootstrap an End-to-end ASR System by Multilingual Training, Transfer Learning, Text-to-text Mapping and Synthetic Audio
    Giollo, Manuel
    Gunceler, Deniz
    Liu, Yulan
    Willett, Daniel
    INTERSPEECH 2021, 2021, : 2416 - 2420
  • [3] End-to-End Video Text Spotting with Transformer
    Wu, Weijia
    Cai, Yuanqiang
    Shen, Chunhua
    Zhang, Debing
    Fu, Ying
    Zhou, Hong
    Luo, Ping
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (09) : 4019 - 4035
  • [4] Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering
    Phakmongkol, Puri
    Vateekul, Peerapon
    APPLIED SCIENCES-BASEL, 2021, 11 (21):
  • [5] Text-to-Text Transfer Transformer Phrasing Model Using Enriched Text Input
    Rezackova, Marketa
    Matousek, Jindrich
    TEXT, SPEECH, AND DIALOGUE (TSD 2022), 2022, 13502 : 389 - 400
  • [6] End-to-End Differentiable GANs for Text Generation
    Kumar, Sachin
    Tsvetkov, Yulia
    NEURIPS WORKSHOPS, 2020, 2020, 137 : 118 - 128
  • [7] Speech-and-Text Transformer: Exploiting Unpaired Text for End-to-End Speech Recognition
    Wang, Qinyi
    Zhou, Xinyuan
    Li, Haizhou
    APSIPA TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING, 2023, 12 (01)
  • [8] Transformer-based end-to-end scene text recognition
    Zhu, Xinghao
    Zhang, Zhi
    PROCEEDINGS OF THE 2021 IEEE 16TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA 2021), 2021, : 1691 - 1695
  • [9] Leveraging Text Data Using Hybrid Transformer-LSTM Based End-to-End ASR in Transfer Learning
    Zeng, Zhiping
    Pham, Van Tung
    Xu, Haihua
    Khassanov, Yerbolat
    Chng, Eng Siong
    Ni, Chongjia
    Ma, Bin
    2021 12TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2021,
  • [10] HaT5: Hate Language Identification using Text-to-Text Transfer Transformer
    Sabry, Sana Sabah
    Adewumi, Tosin
    Abid, Nosheen
    Kovacs, Gyorgy
    Liwicki, Foteini
    Liwicki, Marcus
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,