PEER: Empowering Writing with Large Language Models

Cited by: 7
Authors
Sessler, Kathrin [1 ]
Xiang, Tao [1 ]
Bogenrieder, Lukas [1 ]
Kasneci, Enkelejda [1 ]
Institutions
[1] Tech Univ Munich, Munich, Germany
Keywords
Large Language Models; Writing; Personalized Education;
DOI
10.1007/978-3-031-42682-7_73
Chinese Library Classification
TP39 [Applications of Computers];
Subject Classification Codes
081203 ; 0835 ;
Abstract
The emerging research area of large language models (LLMs) has far-reaching implications for various aspects of our daily lives. In education, in particular, LLMs hold enormous potential for enabling personalized learning and equal opportunities for all students. In a traditional classroom environment, students often struggle to develop individual writing skills because the workload of the teachers limits their ability to provide detailed feedback on each student's essay. To bridge this gap, we have developed a tool called PEER (Paper Evaluation and Empowerment Resource) which exploits the power of LLMs and provides students with comprehensive and engaging feedback on their essays. Our goal is to motivate each student to enhance their writing skills through positive feedback and specific suggestions for improvement. Since its launch in February 2023, PEER has received high levels of interest and demand, resulting in more than 4000 essays uploaded to the platform to date. Moreover, there has been an overwhelming response from teachers who are interested in the project since it has the potential to alleviate their workload by making the task of grading essays less tedious. By collecting a real-world data set incorporating essays of students and feedback from teachers, we will be able to refine and enhance PEER through model fine-tuning in the next steps. Our goal is to leverage LLMs to enhance personalized learning, reduce teacher workload, and ensure that every student has an equal opportunity to excel in writing. The code is available at https://github.com/Kasneci-Lab/AI-assisted-writing.
Pages: 755 - 761
Page count: 7
Related Papers
50 entries total
  • [31] Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting
    Chen, Zhiyu
    Lu, Yujie
    Wang, William Yang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 4295 - 4304
  • [32] Empowering Molecule Discovery for Molecule-Caption Translation With Large Language Models: A ChatGPT Perspective
    Li, Jiatong
    Liu, Yunqing
    Fan, Wenqi
    Wei, Xiao-Yong
    Liu, Hui
    Tang, Jiliang
    Li, Qing
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (11) : 6071 - 6083
  • [33] LOGIC-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning
    Pan, Liangming
    Albalak, Alon
    Wang, Xinyi
    Wang, William Yang
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 3806 - 3824
  • [34] Empowering Few-Shot Recommender Systems With Large Language Models-Enhanced Representations
    Wang, Zhoumeng
    IEEE ACCESS, 2024, 12 : 29144 - 29153
  • [35] Empowering Large Language Models to Leverage Domain-Specific Knowledge in E-Learning
    Lu, Ruei-Shan
    Lin, Ching-Chang
    Tsao, Hsiu-Yuan
    APPLIED SCIENCES-BASEL, 2024, 14 (12):
  • [36] WRITING BY IMITATING LANGUAGE MODELS
    CRAMER, RL
    CRAMER, BB
    LANGUAGE ARTS, 1975, 52 (07) : 1011 - &
  • [37] Revolution or Peril? The Controversial Role of Large Language Models in Medical Manuscript Writing
    Milian, Ricardo Diaz
    Franco, Pablo Moreno
    Freeman, William D.
    Halamka, John D.
    MAYO CLINIC PROCEEDINGS, 2023, 98 (10) : 1444 - 1448
  • [38] The Ethics of (Non)disclosure: Large Language Models in Professional, Nonacademic Writing Contexts
    Piller, Erick
    RUPKATHA JOURNAL ON INTERDISCIPLINARY STUDIES IN HUMANITIES, 2023, 15 (04):
  • [39] LaMPost: AI Writing Assistance for Adults with Dyslexia Using Large Language Models
    Goodman, Steven M.
    Buehler, Erin
    Clary, Patrick
    Coenen, Andy
    Donsbach, Aaron
    Horne, Tiffanie N.
    Lahav, Michal
    Macdonald, Robert
    Michaels, Rain Breaw
    Narayanan, Ajit
    Pushkarna, Mahima
    Riley, Joel
    Santana, Alex
    Shi, Lei
    Sweeney, Rachel
    Weaver, Phil
    Yuan, Ann
    Morris, Meredith Ringel
    COMMUNICATIONS OF THE ACM, 2024, 67 (09)
  • [40] Understanding Radiological Journal Views and Policies on Large Language Models in Academic Writing
    Lee, Tai-Lin
    Ding, Julia
    Trivedi, Hari M.
    Gichoya, Judy W.
    Moon, John T.
    Li, Hanzhou
    JOURNAL OF THE AMERICAN COLLEGE OF RADIOLOGY, 2024, 21 (04) : 678 - 682