Evaluating the Ability of Large Language Models to Generate Motivational Feedback

Cited by: 1
Authors
Gaeta, Angelo [1 ]
Orciuoli, Francesco [1 ]
Pascuzzo, Antonella [1 ]
Peduto, Angela [1 ]
Affiliations
[1] Univ Salerno, DISA MIS, Via Giovanni Paolo II 132, I-84084 Fisciano, Sa, Italy
Keywords
Large Language Model; Intelligent Tutoring Systems; Motivational Feedback
DOI
10.1007/978-3-031-63028-6_15
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The paper describes and evaluates the use of large language models (LLMs) to provide personalized motivational feedback in the context of Intelligent Tutoring Systems (ITS). Specifically, the main contributions of this work are the definition of a novel evaluation framework and an early application of that framework to assess the ability of LLMs to generate textual feedback with motivational features. The experimental results show that LLMs have a promising ability to generate motivational feedback and are therefore good candidates for integration as an additional model in the traditional ITS architecture.
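As an illustration of the kind of generation step such a framework would evaluate, the sketch below shows one way to prompt an LLM for motivational feedback on a student's attempt. This is a minimal sketch, not the paper's method: it assumes the OpenAI Python client, and the model name, prompt wording, and listed motivational elements are illustrative assumptions rather than details taken from the paper.

    # Minimal sketch (not the paper's method): ask an LLM for motivational
    # feedback on a student's attempt inside an ITS. Assumes the OpenAI
    # Python client; model name and prompt wording are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def motivational_feedback(exercise: str, answer: str, correct: bool) -> str:
        prompt = (
            "You are a tutor inside an Intelligent Tutoring System. "
            "Give the student short feedback on the attempt below. "
            "Besides addressing correctness, include motivational elements "
            "(encouragement, effort attribution, a concrete next step).\n\n"
            f"Exercise: {exercise}\n"
            f"Student answer: {answer}\n"
            f"The answer was {'correct' if correct else 'incorrect'}."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat-capable model would do here
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(motivational_feedback("Solve 2x + 3 = 11", "x = 5", correct=False))

In the paper's setting, the evaluation framework would then score such outputs for motivational features; the scoring procedure itself is not reproduced here.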
Pages: 188-201
Number of pages: 14
Related Papers
50 items in total
  • [41] Evaluating Large Language Models on Controlled Generation Tasks
    Sun, Jiao
    Tian, Yufei
    Zhou, Wangchunshu
    Xu, Nan
    Hu, Qian
    Gupta, Rahul
    Wieting, John
    Peng, Nanyun
    Ma, Xuezhe
    2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, 2023: 3155-3168
  • [42] Baby steps in evaluating the capacities of large language models
    Frank, Michael C.
    Nature Reviews Psychology, 2023, 2: 451-452
  • [43] EconNLI: Evaluating Large Language Models on Economics Reasoning
    Guo, Yue
    Yang, Yi
    Findings of the Association for Computational Linguistics: ACL 2024, 2024: 982-994
  • [44] Evaluating Large Language Models for Tax Law Reasoning
    Cavalcante Presa, Joao Paulo
    Camilo Junior, Celso Goncalves
    Teles de Oliveira, Savio Salvarino
    Intelligent Systems, BRACIS 2024, Pt I, 2025, 15412: 460-474
  • [45] A Chinese Dataset for Evaluating the Safeguards in Large Language Models
    Wang, Yuxia
    Zhai, Zenan
    Li, Haonan
    Han, Xudong
    Lin, Lizhi
    Zhang, Zhenxuan
    Zhao, Jingru
    Nakov, Preslav
    Baldwin, Timothy
    Findings of the Association for Computational Linguistics: ACL 2024, 2024: 3106-3119
  • [46] Evaluating large language models in analysing classroom dialogue
    Long, Yun
    Luo, Haifeng
    Zhang, Yu
    npj Science of Learning, 2024, 9 (01)
  • [47] Evaluating large language models in theory of mind tasks
    Kosinski, Michal
    Proceedings of the National Academy of Sciences of the United States of America, 2024, 121 (45)
  • [48] DebugBench: Evaluating Debugging Capability of Large Language Models
    Tian, Runchu
    Ye, Yining
    Qin, Yujia
    Cong, Xin
    Lin, Yankai
    Pan, Yinxu
    Wu, Yesai
    Hui, Haotian
    Liu, Weichuan
    Liu, Zhiyuan
    Sun, Maosong
    Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2024: 4173-4198
  • [49] Prompting a Large Language Model to Generate Diverse Motivational Messages: A Comparison with Human-Written Messages
    Cox, Samuel Rhys
    Abdul, Ashraf
    Ooi, Wei Tsang
    Proceedings of the 11th Conference on Human-Agent Interaction, HAI 2023, 2023: 378-380
  • [50] Exploring Reversal Mathematical Reasoning Ability for Large Language Models
    Guo, Pei
    You, Wangjie
    Li, Juntao
    Yan, Bowen
    Zhang, Min
    Findings of the Association for Computational Linguistics: ACL 2024, 2024: 13671-13685