Distractor Generation Through Text-to-Text Transformer Models

Cited by: 1
Authors
de-Fitero-Dominguez, David [1 ]
Garcia-Lopez, Eva [1 ]
Garcia-Cabot, Antonio [1 ]
del-Hoyo-Gabaldon, Jesus-Angel [1 ]
Moreno-Cediel, Antonio [1 ]
Affiliations
[1] Univ Alcala, Dept Ciencias Comp, Edificio Politecn, Alcala De Henares 28871, Madrid, Spain
Keywords
Artificial intelligence; natural languages; natural language processing; computer applications; educational technology; MULTIPLE; CORRECT
DOI
10.1109/ACCESS.2024.3361673
CLC classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
In recent years, transformer language models have made a significant impact on automatic text generation. This study focuses on the task of distractor generation in Spanish using a fine-tuned multilingual text-to-text model, namely mT5. Our method outperformed established baselines based on LSTM networks, confirming the effectiveness of Transformer architectures in such NLP tasks. While comparisons with other Transformer-based solutions yielded diverse outcomes depending on the metric of choice, our method notably achieved superior results on the ROUGE metric compared to the GPT-2 approach. Although traditional evaluation metrics such as BLEU and ROUGE are commonly used, this paper argues for more context-sensitive metrics given the inherent variability in acceptable distractor generation results. Among the contributions of this research are a comprehensive comparison with other methods, an examination of the potential drawbacks of multilingual models, and the introduction of alternative evaluation metrics. Future research directions, derived from our findings and a review of related works, are also suggested, with a particular emphasis on leveraging other language models and Transformer architectures.
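The abstract compares methods on ROUGE, whose ROUGE-L variant scores a generated distractor against a reference by their longest common subsequence of tokens. As a point of reference only, here is a minimal sketch of that standard computation; the function names, whitespace tokenization, and the beta weight are our own illustrative choices, not details taken from the paper:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists
    via classic dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def rouge_l_f1(reference, candidate, beta=1.2):
    """ROUGE-L F-score: LCS-based recall and precision combined
    with a recall-weighted F-measure (beta is an assumption here)."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    recall = lcs / len(ref)
    precision = lcs / len(cand)
    return ((1 + beta ** 2) * precision * recall) / (recall + beta ** 2 * precision)
```

For example, `rouge_l_f1("el gato negro", "el gato blanco")` yields 2/3, since the two three-token strings share the two-token subsequence "el gato". Because many distinct distractors can be equally acceptable for a given question, such overlap scores can penalize valid outputs, which is the variability the abstract's call for context-sensitive metrics addresses.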
Pages: 25580-25589
Page count: 10
Related Papers
50 items in total
  • [41] Transformer models for enhancing AttnGAN based text to image generation
    Naveen, S.
    Kiran, M. S. S. Ram
    Indupriya, M.
    Manikanta, T. V.
    Sudeep, P. V.
    IMAGE AND VISION COMPUTING, 2021, 115
  • [42] Evaluation of Transfer Learning for Polish with a Text-to-Text Model
    Chrabrowa, Aleksandra
    Dragan, Lukasz
    Grzegorczyk, Karol
    Kajtoch, Dariusz
    Koszowski, Mikolaj
    Mroczkowski, Robert
    Rybak, Piotr
    LREC 2022: THIRTEENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 4374 - 4394
  • [43] Distractor Generation based on Text2Text Language Models with Pseudo Kullback-Leibler Divergence Regulation
    Wang, Hui-Juan
    Hsieh, Kai-Yu
    Yu, Han-Cheng
    Tsou, Jui-Ching
    Shih, Yu-An
    Huang, Chen-Hua
    Fan, Yao-Chung
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 12477 - 12491
  • [44] Product Titles-to-Attributes As a Text-to-Text Task
    Fuchs, Gilad
    Acriche, Yoni
    PROCEEDINGS OF THE 5TH WORKSHOP ON E-COMMERCE AND NLP (ECNLP 5), 2022, : 91 - 98
  • [45] A Text-to-Text Model for Multilingual Offensive Language Identification
    Ranasinghe, Tharindu
    Zampieri, Marcos
    13TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING AND THE 3RD CONFERENCE OF THE ASIA-PACIFIC CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, IJCNLP-AACL 2023, 2023, : 375 - 384
  • [46] LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models
    Bulat, Adrian
    Tzimiropoulos, Georgios
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 23232 - 23241
  • [47] Text-to-text generative approach for enhanced complex word identification
    Sliwiak, Patrycja
    Shah, Syed Afaq Ali
    NEUROCOMPUTING, 2024, 610
  • [48] TESS: Text-to-Text Self-Conditioned Simplex Diffusion
    Mahabadi, Rabeeh Karimi
    Ivison, Hamish
    Tae, Jaesung
    Henderson, James
    Beltagy, Iz
    Peters, Matthew E.
    Cohan, Arman
    PROCEEDINGS OF THE 18TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 2347 - 2361
  • [49] Text-to-text machine translation using the RECONTRA connectionist model
    Castaño, MA
    Casacuberta, F
    ENGINEERING APPLICATIONS OF BIO-INSPIRED ARTIFICIAL NEURAL NETWORKS, VOL II, 1999, 1607 : 683 - 692
  • [50] T5G2P: Text-to-Text Transfer Transformer Based Grapheme-to-Phoneme Conversion
    Rezackova, Marketa
    Tihelka, Daniel
    Matousek, Jindrich
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 3466 - 3476