Evaluating the Ability of Large Language Models to Generate Motivational Feedback

Cited by: 1
Authors
Gaeta, Angelo [1 ]
Orciuoli, Francesco [1 ]
Pascuzzo, Antonella [1 ]
Peduto, Angela [1 ]
Affiliations
[1] Univ Salerno, DISA MIS, Via Giovanni Paolo II 132, I-84084 Fisciano, Sa, Italy
Keywords
Large Language Model; Intelligent Tutoring Systems; Motivational feedback;
DOI
10.1007/978-3-031-63028-6_15
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The paper describes and evaluates the use of large language models (LLMs) to provide personalized motivational feedback in the context of Intelligent Tutoring Systems (ITS). Specifically, the main contributions of this work are the definition of a novel evaluation framework and an early application of that framework to assess the ability of LLMs to generate textual feedback with motivational features. The experimental results show that LLMs have a promising ability to generate motivational feedback and therefore stand a good chance of being integrated as an additional model into the traditional ITS architecture.
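The record does not include the paper's evaluation framework itself. As a rough, hypothetical illustration of the kind of pipeline the abstract describes (generating motivational feedback with an LLM and then checking it against motivational features), the Python sketch below uses the OpenAI chat API; the model name, the criteria list, and the function names generate_feedback and rate_motivation are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch (not the authors' framework): generate motivational feedback
# with an LLM, then have a second LLM pass judge it against a checklist of
# motivational features. Requires the `openai` package and OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical motivational features a rater might look for.
CRITERIA = [
    "acknowledges the student's effort",
    "encourages persistence",
    "suggests a concrete next step",
]

def generate_feedback(exercise: str, student_answer: str) -> str:
    """Ask the LLM for short, encouraging, personalized feedback."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "You are a tutor. Give brief, encouraging, personalized feedback."},
            {"role": "user",
             "content": f"Exercise: {exercise}\nStudent answer: {student_answer}"},
        ],
    )
    return resp.choices[0].message.content

def rate_motivation(feedback: str) -> dict:
    """Ask the LLM, acting as a judge, whether each criterion is present."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Return a JSON object mapping each criterion to true or false."},
            {"role": "user",
             "content": f"Criteria: {CRITERIA}\nFeedback: {feedback}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    fb = generate_feedback("Solve 2x + 3 = 7", "x = 3")
    print(fb)
    print(rate_motivation(fb))
```

In such a setup the criteria checklist would stand in for the motivational features defined by an evaluation framework like the one the paper proposes; the sketch only shows the mechanical generate-then-rate loop, not the framework's actual criteria or scoring.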
Pages: 188-201
Page count: 14
Related papers
50 records in total
  • [21] Evaluating large language models as agents in the clinic
    Mehandru, Nikita
    Miao, Brenda Y.
    Almaraz, Eduardo Rodriguez
    Sushil, Madhumita
    Butte, Atul J.
    Alaa, Ahmed
    NPJ DIGITAL MEDICINE, 2024, 7 (01)
  • [22] EVALUATING LARGE LANGUAGE MODELS ON THEIR ACCURACY AND COMPLETENESS
    Edalat, Camellia
    Kirupaharan, Nila
    Dalvin, Lauren A.
    Mishra, Kapil
    Marshall, Rayna
    Xu, Hannah
    Francis, Jasmine H.
    Berkenstock, Meghan
    RETINA-THE JOURNAL OF RETINAL AND VITREOUS DISEASES, 2025, 45 (01): : 128 - 132
  • [23] Evaluating Intelligence and Knowledge in Large Language Models
    Bianchini, Francesco
    TOPOI-AN INTERNATIONAL REVIEW OF PHILOSOPHY, 2025, 44 (01): : 163 - 173
  • [24] Evaluating large language models for software testing
    Li, Yihao
    Liu, Pan
    Wang, Haiyang
    Chu, Jie
    Wong, W. Eric
    COMPUTER STANDARDS & INTERFACES, 2025, 93
  • [25] Evaluating large language models as agents in the clinic
    Mehandru, Nikita
    Miao, Brenda Y.
    Almaraz, Eduardo Rodriguez
    Sushil, Madhumita
    Butte, Atul J.
    Alaa, Ahmed
    NPJ DIGITAL MEDICINE, 2024, 7 (01)
  • [26] Evaluating Large Language Models' Ability Using a Psychiatric Screening Tool Based on Metaphor and Sarcasm Scenarios
    Yakura, Hiromu
    JOURNAL OF INTELLIGENCE, 2024, 12 (07)
  • [27] Numeracy for Language Models: Evaluating and Improving their Ability to Predict Numbers
    Spithourakis, Georgios P.
    Riedel, Sebastian
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 2104 - 2115
  • [28] LayoutPrompter: Awaken the Design Ability of Large Language Models
    Lin, Jiawei
    Guo, Jiaqi
    Sun, Shizhao
    Yang, Zijiang James
    Lou, Jian-Guang
    Zhang, Dongmei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [29] Promises and Pitfalls: Using Large Language Models to Generate Visualization Items
    Cui, Yuan
    Ge, Lily W.
    Ding, Yiren
    Harrison, Lane
    Yang, Fumeng
    Kay, Matthew
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2025, 31 (01) : 1094 - 1104
  • [30] Aligning Large Language Models through Synthetic Feedback
    Kim, Sungdong
    Bae, Sanghwan
    Shin, Jamin
    Kang, Soyoung
    Kwak, Donghyun
    Yoo, Kang Min
    Seo, Minjoon
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 13677 - 13700