Formative Feedback on Student-Authored Summaries in Intelligent Textbooks Using Large Language Models

Cited by: 0
Authors
Morris, Wesley [1 ]
Crossley, Scott [1 ]
Holmes, Langdon [1 ]
Ou, Chaohua [2 ]
Dascalu, Mihai [3 ]
Mcnamara, Danielle [4 ]
Affiliations
[1] Vanderbilt Univ, Nashville, TN 37235 USA
[2] Georgia Inst Technol, Atlanta, GA USA
[3] Univ Politehn Bucuresti, Bucharest, Romania
[4] Arizona State Univ, Tempe, AZ USA
Funding
National Science Foundation (USA)
Keywords
Intelligent textbooks; Large language models; Automated summary scoring; Transformers;
DOI
10.1007/s40593-024-00395-0
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
As intelligent textbooks become more common in classrooms and other educational settings, the need to make them more interactive grows. One approach is to ask students to generate knowledge in response to textbook content and to provide feedback on that knowledge. This study develops Natural Language Processing models that automatically give students feedback on the quality of summaries written at the end of intelligent textbook sections. The study builds on the work of Botarleanu et al. (2022), who used a Longformer Large Language Model (LLM) to develop a summary grading model that explained around 55% of the variance in holistic summary scores assigned by human raters. The present study uses principal component analysis to distill scores from an analytic rubric into two principal components: content and wording. Two encoder-only classification models, fine-tuned from Longformer on the summaries and their source texts using these principal components as targets, explained 82% and 70% of the score variance for content and wording, respectively. On a dataset of summaries collected on the crowd-sourcing site Prolific, the content model remained robust, although the wording model's accuracy was reduced relative to the training set. The developed models are freely available on HuggingFace and allow intelligent textbooks to give users real-time formative feedback on reading comprehension through summarization. The models can also be used for other summarization applications in learning systems.
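The rubric-distillation step described in the abstract can be sketched with scikit-learn. The trait names and score values below are invented for illustration (the paper's actual rubric and data are not reproduced here); the sketch only shows how analytic rubric scores could be projected onto two principal components analogous to "content" and "wording":

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical analytic rubric scores: rows are summaries, columns are
# rubric traits (e.g., main ideas, details, cohesion, language use).
# All values are invented for illustration.
scores = np.array([
    [4, 4, 3, 3],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [4, 5, 5, 4],
], dtype=float)

# Standardize each trait so no single scale dominates, then project
# onto two principal components, which could then serve as the
# "content" and "wording" regression/classification targets.
standardized = (scores - scores.mean(axis=0)) / scores.std(axis=0)
pca = PCA(n_components=2)
components = pca.fit_transform(standardized)

print(components.shape)  # (6, 2): one (content, wording)-style pair per summary
print(pca.explained_variance_ratio_)  # share of rubric variance per component
```

In a real pipeline, each summary's two component scores would become the training targets for the fine-tuned Longformer models.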
Pages: 22