PEER: Empowering Writing with Large Language Models

Cited by: 7
Authors
Sessler, Kathrin [1 ]
Xiang, Tao [1 ]
Bogenrieder, Lukas [1 ]
Kasneci, Enkelejda [1 ]
Affiliations
[1] Tech Univ Munich, Munich, Germany
Keywords
Large Language Models; Writing; Personalized Education;
DOI
10.1007/978-3-031-42682-7_73
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Subject Classification Codes
081203 ; 0835 ;
Abstract
The emerging research area of large language models (LLMs) has far-reaching implications for various aspects of our daily lives. In education, in particular, LLMs hold enormous potential for enabling personalized learning and equal opportunities for all students. In a traditional classroom environment, students often struggle to develop individual writing skills because the workload of the teachers limits their ability to provide detailed feedback on each student's essay. To bridge this gap, we have developed a tool called PEER (Paper Evaluation and Empowerment Resource) which exploits the power of LLMs and provides students with comprehensive and engaging feedback on their essays. Our goal is to motivate each student to enhance their writing skills through positive feedback and specific suggestions for improvement. Since its launch in February 2023, PEER has received high levels of interest and demand, resulting in more than 4000 essays uploaded to the platform to date. Moreover, there has been an overwhelming response from teachers who are interested in the project since it has the potential to alleviate their workload by making the task of grading essays less tedious. By collecting a real-world data set incorporating essays of students and feedback from teachers, we will be able to refine and enhance PEER through model fine-tuning in the next steps. Our goal is to leverage LLMs to enhance personalized learning, reduce teacher workload, and ensure that every student has an equal opportunity to excel in writing. The code is available at https://github.com/Kasneci-Lab/AI-assisted-writing.
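The abstract describes the mechanism only at a high level: an uploaded essay is sent to an LLM together with instructions to return encouraging, specific feedback. The authors' actual implementation is in the linked repository; the snippet below is only a minimal illustrative sketch of such a feedback call, assuming the OpenAI chat-completions API, and using a prompt wording, model choice, and essay_feedback helper that are hypothetical rather than taken from PEER.

```python
# Minimal sketch of LLM-based essay feedback in the spirit of the abstract.
# Prompt text, model name, and function names are illustrative assumptions,
# NOT the PEER implementation (see the repository linked above for that).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

FEEDBACK_PROMPT = (
    "You are a supportive writing tutor. Read the student's essay and reply with:\n"
    "1. Two or three specific strengths, quoting short passages from the text.\n"
    "2. Two or three concrete suggestions for improvement.\n"
    "Keep the tone encouraging and address the student directly."
)

def essay_feedback(essay: str, model: str = "gpt-3.5-turbo") -> str:
    """Return formative feedback on a single essay (illustrative only)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": FEEDBACK_PROMPT},
            {"role": "user", "content": essay},
        ],
        temperature=0.3,  # keep the feedback focused and reproducible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(essay_feedback("Last summer I visited my grandparents in the mountains ..."))
```

In practice, the collected essays and teacher feedback mentioned in the abstract could later serve as training data for fine-tuning, replacing a generic prompt like the one above with a model adapted to the target rubric.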
Pages: 755-761
Page count: 7
Related Papers
50 records in total
  • [1] Empowering Large Language Models for Textual Data Augmentation
    Li, Yichuan
    Ding, Kaize
    Wang, Jianling
    Lee, Kyumin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 12734 - 12751
  • [2] chatHPC: Empowering HPC users with large language models
    Yin, Junqi
    Hines, Jesse
    Herron, Emily
    Ghosal, Tirthankar
    Liu, Hong
    Prentice, Suzanne
    Lama, Vanessa
    Wang, Feiyi
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01):
  • [3] Large language models and the future of academic writing
    Nayak, P.
    Gogtay, N. J.
    JOURNAL OF POSTGRADUATE MEDICINE, 2024, 70 (02) : 67 - 68
  • [4] Wordcraft: Story Writing With Large Language Models
    Yuan, Ann
    Coenen, Andy
    Reif, Emily
    Ippolito, Daphne
    IUI'22: 27TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 2022, : 841 - 852
  • [5] Empowering Time Series Analysis with Large Language Models: A Survey
    Jiang, Yushan
    Pan, Zijie
    Zhang, Xikun
    Garg, Sahil
    Schneider, Anderson
    Nevmyvaka, Yuriy
    Song, Dongjin
    PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 8095 - 8103
  • [6] AtomTool: Empowering Large Language Models with Tool Utilization Skills
    Li, Yongle
    Zhang, Zheng
    Zhang, Junqi
    Hu, Wenbo
    Wu, Yongyu
    Hong, Richang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT 1, 2025, 15031 : 323 - 337
  • [7] PointLLM: Empowering Large Language Models to Understand Point Clouds
    Xu, Runsen
    Wang, Xiaolong
    Wang, Tai
    Chen, Yilun
    Pang, Jiangmiao
    Lin, Dahua
    COMPUTER VISION - ECCV 2024, PT XXV, 2025, 15083 : 131 - 147
  • [8] The dangers of using large language models for peer review
    Carr, Edward J.
    Wu, Mary Y.
    Gahir, Joshua
    Harvey, Ruth
    Townsley, Hermaleigh
    Bailey, Chris
    Fowler, Ashley S.
    Dowgier, Giulia
    Hobbs, Agnieszka
    Herman, Lou
    Ragno, Martina
    Miah, Murad
    Bawumia, Phillip
    Smith, Callie
    Miranda, Mauro
    Mears, Harriet V.
    Adams, Lorin
    Haptipoglu, Emine
    O'Reilly, Nicola
    Warchal, Scott
    Sawyer, Chelsea
    Ambrose, Karen
    Kelly, Gavin
    Beale, Rupert
    Papineni, Padmasayee
    Corrah, Tumena
    Gilson, Richard
    Gamblin, Steve
    Kassiotis, George
    Libri, Vincenzo
    Williams, Bryan
    Swanton, Charles
    Gandhi, Sonia
    Bauer, David L. V.
    Wall, Emma C.
    LANCET INFECTIOUS DISEASES, 2023, 23 (07) : 781 - 781
  • [9] Empowering Smart Glasses with Large Language Models: Towards Ubiquitous AGI
    Zhang, Dell
    Li, Yongxiang
    He, Zhongjiang
    Li, Xuelong
    COMPANION OF THE 2024 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING, UBICOMP COMPANION 2024, 2024, : 631 - 633
  • [10] Potential impact of large language models on academic writing
    Alahdab, Fares
    BMJ EVIDENCE-BASED MEDICINE, 2024, 29 (03) : 201 - 202