Using Large Language Models for Automated Grading of Student Writing about Science

Cited by: 0
Authors
Impey, Chris [1 ]
Wenger, Matthew [1 ]
Garuda, Nikhil [1 ]
Golchin, Shahriar [2 ]
Stamer, Sarah [1 ]
Affiliations
[1] Univ Arizona, Dept Astron, Tucson, AZ 85721 USA
[2] Univ Arizona, Dept Comp Sci, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation;
Keywords
Student writing; Science classes; Online education; Assessment; Machine learning; Large language models; ONLINE; ASTRONOMY; RATER;
DOI
10.1007/s40593-024-00453-7
Chinese Library Classification
TP39 [Computer applications];
Subject Classification Codes
081203; 0835;
Abstract
Assessing writing in large classes for formal or informal learners presents a significant challenge. Consequently, most large classes, particularly in science, rely on objective assessment tools such as multiple-choice quizzes, which have a single correct answer. The rapid development of AI has introduced the possibility of using large language models (LLMs) to evaluate student writing. An experiment was conducted using GPT-4 to determine if machine learning methods based on LLMs can match or exceed the reliability of instructor grading in evaluating short writing assignments on topics in astronomy. The audience consisted of adult learners in three massive open online courses (MOOCs) offered through Coursera. One course was on astronomy, the second was on astrobiology, and the third was on the history and philosophy of astronomy. The results should also be applicable to non-science majors in university settings, where the content and modes of evaluation are similar. The data comprised answers from 120 students to 12 questions across the three courses. GPT-4 was provided with total grades, model answers, and rubrics from an instructor for all three courses. In addition to evaluating how reliably the LLM reproduced instructor grades, the LLM was also tasked with generating its own rubrics. Overall, the LLM was more reliable than peer grading, both in aggregate and by individual student, and approximately matched instructor grades for all three online courses. The implication is that LLMs may soon be used for automated, reliable, and scalable grading of student science writing.
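The grading workflow the abstract describes (supplying GPT-4 with the question, a model answer, and the instructor's rubric, then collecting a numeric score) can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual implementation: the function names, prompt wording, and score scale are all assumptions, and in practice the assembled prompt would be sent to an LLM API rather than parsed locally.

```python
import re

def build_grading_prompt(question: str, model_answer: str,
                         rubric: str, student_answer: str) -> str:
    """Assemble a single grading prompt for an LLM such as GPT-4.

    The prompt format here is a hypothetical example; the paper does not
    specify the exact wording used in the experiment.
    """
    return (
        "You are grading a short science-writing answer.\n"
        f"Question: {question}\n"
        f"Model answer: {model_answer}\n"
        f"Rubric: {rubric}\n"
        f"Student answer: {student_answer}\n"
        "Reply with only an integer score."
    )

def parse_score(reply: str, max_score: int = 10) -> int:
    """Extract the integer score from the LLM's reply, clamped to range.

    Clamping guards against out-of-range model outputs; the 0..max_score
    scale is an assumed convention, not taken from the paper.
    """
    match = re.search(r"-?\d+", reply)
    if match is None:
        raise ValueError("no score found in reply")
    return max(0, min(max_score, int(match.group())))
```

In a full pipeline, `build_grading_prompt` would be called once per student answer and the reply passed through `parse_score` before comparing LLM scores against instructor or peer grades.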
Pages: 35