PEER: Empowering Writing with Large Language Models

Cited by: 7
Authors
Sessler, Kathrin [1 ]
Xiang, Tao [1 ]
Bogenrieder, Lukas [1 ]
Kasneci, Enkelejda [1 ]
Affiliations
[1] Tech Univ Munich, Munich, Germany
Keywords
Large Language Models; Writing; Personalized Education;
DOI
10.1007/978-3-031-42682-7_73
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
The emerging research area of large language models (LLMs) has far-reaching implications for various aspects of our daily lives. In education, in particular, LLMs hold enormous potential for enabling personalized learning and equal opportunities for all students. In a traditional classroom environment, students often struggle to develop individual writing skills because the workload of the teachers limits their ability to provide detailed feedback on each student's essay. To bridge this gap, we have developed a tool called PEER (Paper Evaluation and Empowerment Resource), which exploits the power of LLMs and provides students with comprehensive and engaging feedback on their essays. Our goal is to motivate each student to enhance their writing skills through positive feedback and specific suggestions for improvement. Since its launch in February 2023, PEER has received high levels of interest and demand, resulting in more than 4000 essays uploaded to the platform to date. Moreover, there has been an overwhelming response from teachers who are interested in the project, since it has the potential to alleviate their workload by making the task of grading essays less tedious. By collecting a real-world data set incorporating essays from students and feedback from teachers, we will be able to refine and enhance PEER through model fine-tuning in the next steps. Our goal is to leverage LLMs to enhance personalized learning, reduce teacher workload, and ensure that every student has an equal opportunity to excel in writing. The code is available at https://github.com/Kasneci-Lab/AI-assisted-writing.
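To make the described feedback loop concrete, the sketch below shows how an essay-feedback request to an LLM might be structured, combining encouraging remarks with specific suggestions for improvement as the abstract describes. This is a minimal illustration only: the prompt wording, rubric, model name, and use of the OpenAI Python SDK are assumptions for demonstration and are not taken from the PEER implementation in the linked repository.

```python
"""Illustrative sketch of an LLM-based essay-feedback call (not PEER's actual code)."""
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical tutoring instructions mirroring the goals stated in the abstract:
# an encouraging tone plus concrete, actionable suggestions for improvement.
SYSTEM_PROMPT = (
    "You are a supportive writing tutor. Read the student's essay and reply with "
    "(1) two or three specific strengths, (2) two or three concrete suggestions "
    "for improvement, and (3) a short encouraging closing remark."
)


def essay_feedback(essay_text: str, model: str = "gpt-4o-mini") -> str:
    """Request structured, encouraging feedback on a single student essay."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": essay_text},
        ],
        temperature=0.3,  # favor consistent feedback over creative variation
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample_essay = "Last summer my family travelled to the mountains. It was very fun ..."
    print(essay_feedback(sample_essay))
```

In a classroom setting, the system prompt would typically be adapted to the grade level and grading rubric, and teacher corrections of the generated feedback could be collected as training data for the fine-tuning step mentioned in the abstract.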
Pages: 755-761
Page count: 7