Using Large Language Models for Automated Grading of Student Writing about Science

Times Cited: 0
Authors
Impey, Chris [1 ]
Wenger, Matthew [1 ]
Garuda, Nikhil [1 ]
Golchin, Shahriar [2 ]
Stamer, Sarah [1 ]
Affiliations
[1] Univ Arizona, Dept Astron, Tucson, AZ 85721 USA
[2] Univ Arizona, Dept Comp Sci, Tucson, AZ 85721 USA
Funding
U.S. National Science Foundation;
Keywords
Student writing; Science classes; Online education; Assessment; Machine learning; Large language models; ONLINE; ASTRONOMY; RATER;
DOI
10.1007/s40593-024-00453-7
Chinese Library Classification
TP39 [Computer Applications];
Subject Classification Codes
081203; 0835;
Abstract
Assessing writing in large classes for formal or informal learners presents a significant challenge. Consequently, most large classes, particularly in science, rely on objective assessment tools such as multiple-choice quizzes, which have a single correct answer. The rapid development of AI has introduced the possibility of using large language models (LLMs) to evaluate student writing. An experiment was conducted using GPT-4 to determine if machine learning methods based on LLMs can match or exceed the reliability of instructor grading in evaluating short writing assignments on topics in astronomy. The audience consisted of adult learners in three massive open online courses (MOOCs) offered through Coursera. One course was on astronomy, the second was on astrobiology, and the third was on the history and philosophy of astronomy. The results should also be applicable to non-science majors in university settings, where the content and modes of evaluation are similar. The data comprised answers from 120 students to 12 questions across the three courses. GPT-4 was provided with total grades, model answers, and rubrics from an instructor for all three courses. In addition to evaluating how reliably the LLM reproduced instructor grades, the LLM was also tasked with generating its own rubrics. Overall, the LLM was more reliable than peer grading, both in aggregate and by individual student, and approximately matched instructor grades for all three online courses. The implication is that LLMs may soon be used for automated, reliable, and scalable grading of student science writing.
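The grading setup summarized in the abstract (prompting GPT-4 with the question, the instructor's rubric, a model answer, and the student response, then asking for a score) can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical example assuming the OpenAI chat-completions API; the function name, prompt wording, and point scale are invented for illustration and do not reproduce the authors' actual pipeline.

# Illustrative sketch only: scoring one short answer against an instructor rubric.
# Model name, prompt wording, and field names are assumptions, not the paper's method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def grade_answer(question: str, rubric: str, model_answer: str,
                 student_answer: str, max_points: int = 10) -> str:
    """Ask the model to score a short-answer response against the rubric."""
    prompt = (
        "You are grading a short writing assignment in an astronomy course.\n"
        f"Question: {question}\n"
        f"Instructor rubric:\n{rubric}\n"
        f"Model answer:\n{model_answer}\n"
        f"Student answer:\n{student_answer}\n"
        f"Assign a score out of {max_points} points and briefly justify it, "
        "citing the rubric criteria."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic scoring aids reproducibility across runs
    )
    return response.choices[0].message.content

# Example call with hypothetical course data:
# print(grade_answer("Why do stars fuse hydrogen?", rubric_text,
#                    model_answer_text, student_response_text))

A setup along these lines could be run once per student answer and compared with instructor and peer grades, which is the kind of reliability comparison the study reports.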
Pages: 35