Formative Feedback on Student-Authored Summaries in Intelligent Textbooks Using Large Language Models

Cited by: 0
|
Authors
Morris, Wesley [1 ]
Crossley, Scott [1 ]
Holmes, Langdon [1 ]
Ou, Chaohua [2 ]
Dascalu, Mihai [3 ]
Mcnamara, Danielle [4 ]
Institutions
[1] Vanderbilt Univ, Nashville, TN 37235 USA
[2] Georgia Inst Technol, Atlanta, GA USA
[3] Univ Politehn Bucuresti, Bucharest, Romania
[4] Arizona State Univ, Tempe, AZ USA
Funding
U.S. National Science Foundation;
Keywords
Intelligent textbooks; Large language models; Automated summary scoring; Transformers;
DOI
10.1007/s40593-024-00395-0
CLC Classification
TP39 [Computer Applications];
Subject Classification
081203; 0835;
Abstract
As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need to make them more interactive arises. One approach is to ask students to generate knowledge in response to textbook content and to provide feedback on that knowledge. This study develops Natural Language Processing models that automatically give students feedback on the quality of summaries written at the end of intelligent textbook sections. The study builds on the work of Botarleanu et al. (2022), who used a Longformer large language model (LLM) to develop a summary grading model that explained around 55% of the variance in holistic summary scores assigned by human raters. The present study uses a principal component analysis to distill scores from an analytic rubric into two principal components: content and wording. Two encoder-only classification models fine-tuned from Longformer on the summaries and their source texts using these principal components as targets explained 82% and 70% of the score variance for content and wording, respectively. On a dataset of summaries collected on the crowd-sourcing site Prolific, the content model proved robust, although the wording model's accuracy was reduced compared to the training set. The developed models are freely available on HuggingFace and allow intelligent textbooks to provide formative feedback to users, assessing reading comprehension through summarization in real time. The models can also be used for other summarization applications in learning systems.
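The score-distillation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the rubric dimension names and the toy score matrix are assumptions, standing in for the paper's analytic-rubric scores; only the overall shape (standardize, then reduce to two components used as training targets) reflects the abstract.

```python
# Sketch: distill analytic-rubric scores into two principal components
# ("content" and "wording") to use as regression/classification targets.
# The rubric dimensions and synthetic data below are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
# Toy rubric scores: two correlated groups of dimensions, simulating
# content-related and wording-related ratings for each summary.
content_latent = rng.normal(3, 1, n)
wording_latent = rng.normal(3, 1, n)
scores = np.column_stack([
    content_latent + rng.normal(0, 0.3, n),  # hypothetical "main ideas"
    content_latent + rng.normal(0, 0.3, n),  # hypothetical "details"
    wording_latent + rng.normal(0, 0.3, n),  # hypothetical "paraphrasing"
    wording_latent + rng.normal(0, 0.3, n),  # hypothetical "language use"
])

# Standardize the rubric dimensions, then keep the first two components.
pca = PCA(n_components=2)
components = pca.fit_transform(StandardScaler().fit_transform(scores))
print(components.shape)  # one (content-like, wording-like) pair per summary
print(pca.explained_variance_ratio_)
```

Each row of `components` would then serve as the target pair for fine-tuning the two Longformer-based classifiers mentioned in the abstract.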
Pages: 22