Propagating Large Language Models Programming Feedback

Cited by: 0
Authors
Koutcheme, Charles [1 ]
Hellas, Arto [1 ]
Affiliations
[1] Aalto Univ, Espoo, Finland
Keywords
large language models; programming feedback; computer science education;
DOI
10.1145/3657604.3664665
CLC Classification
TP39 [Applications of Computers]
Discipline Code
081203; 0835
Abstract
Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, deploying LLMs effectively in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods that minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requesting next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss the implications of our findings and suggest directions for future research on optimizing feedback mechanisms for large-scale educational environments.
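The abstract does not specify the propagation mechanism, so as a rough illustration of the general idea, the following minimal Python sketch shows one plausible scheme: cache hints generated by the LLM and reuse them for sufficiently similar incorrect submissions, so that only genuinely novel submissions trigger a paid API call. The similarity measure (difflib.SequenceMatcher), the 0.9 threshold, and the llm_generate_hint stub are all assumptions for illustration, not the authors' method.

    from difflib import SequenceMatcher

    # Hypothetical stand-in for a paid LLM API call (e.g., GPT-4);
    # not part of the paper, replace with a real provider call.
    def llm_generate_hint(program: str) -> str:
        raise NotImplementedError("call an LLM provider here")

    class FeedbackPropagator:
        """Reuse (propagate) cached hints for similar incorrect submissions,
        so only genuinely novel submissions trigger a new LLM call."""

        def __init__(self, threshold: float = 0.9):
            self.threshold = threshold              # assumed similarity cutoff
            self.cache: list[tuple[str, str]] = []  # (program, hint) pairs

        def get_hint(self, program: str) -> str:
            # Try to propagate feedback from a previously seen, similar program.
            for cached_program, hint in self.cache:
                if SequenceMatcher(None, program, cached_program).ratio() >= self.threshold:
                    return hint  # propagated feedback: no API call needed
            # Cache miss: pay for one LLM call and remember the result.
            hint = llm_generate_hint(program)
            self.cache.append((program, hint))
            return hint

In a MOOC setting, a scheme like this trades some hint specificity for a lower API bill; the paper's actual propagation approach and results are described in the article itself.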
Pages: 366 - 370
Page count: 5
Related Papers
50 records in total
  • [31] Ironies of Programming Automation: Exploring the Experience of Code Synthesis via Large Language Models
    McCabe, Alan T.
    Björkman, Moa
    Engström, Joel
    Kuang, Peng
    Söderberg, Emma
    Church, Luke
    [J]. PROCEEDINGS OF THE 8TH INTERNATIONAL CONFERENCE ON THE ART, SCIENCE, AND ENGINEERING OF PROGRAMMING, PROGRAMMING COMPANION 2024, 2024, : 12 - 21
  • [32] A Survey of Programming Language Memory Models
    Moiseenko, E.
    Podkopaev, A.
    Koznov, D.
    [J]. PROGRAMMING AND COMPUTER SOFTWARE, 2021, 47 (06) : 439 - 456
  • [34] Comparing Feedback from Large Language Models and Instructors: Teaching Computer Science at Scale
    Nguyen, Ha
    Stott, Nate
    Allan, Vicki
    [J]. PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON LEARNING@SCALE, L@S 2024, 2024, : 335 - 339
  • [35] Large Language Models in Science [Large Language Models in der Wissenschaft]
    Kowalewski, Karl-Friedrich
    Rodler, Severin
    [J]. Die Urologie, 2024, 63 (9) : 860 - 866
  • [36] Teach AI How to Code: Using Large Language Models as Teachable Agents for Programming Education
    Jin, Hyoungwook
    Lee, Seonghee
    Shin, Hyungyu
    Kim, Juho
    [J]. PROCEEDINGS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024
  • [37] Insights from Social Shaping Theory: The Appropriation of Large Language Models in an Undergraduate Programming Course
    Padiyath, Aadarsh
    Hou, Xinying
    Pang, Amy
    Vargas, Diego Viramontes
    Gu, Xingjian
    Nelson-Fromm, Tamara
    Wu, Zihan
    Guzdial, Mark
    Ericson, Barbara
    [J]. 20TH ANNUAL ACM CONFERENCE ON INTERNATIONAL COMPUTING EDUCATION RESEARCH, ICER 2024, VOL 1, 2024, : 114 - 130
  • [38] AI-Tutoring in Software Engineering Education: Experiences with Large Language Models in Programming Assessments
    Frankford, Eduard
    Sauerwein, Clemens
    Bassner, Patrick
    Krusche, Stephan
    Breu, Ruth
    [J]. 2024 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: SOFTWARE ENGINEERING EDUCATION AND TRAINING, ICSE-SEET 2024, 2024, : 309 - 319
  • [39] Propagating Uncertainty in Power-System DAE Models With Semidefinite Programming
    Choi, Hyungjin
    Seiler, Peter J.
    Dhople, Sairaj V.
    [J]. IEEE TRANSACTIONS ON POWER SYSTEMS, 2017, 32 (04) : 3146 - 3156
  • [40] Dissociating language and thought in large language models
    Mahowald, Kyle
    Ivanova, Anna A.
    Blank, Idan A.
    Kanwisher, Nancy
    Tenenbaum, Joshua B.
    Fedorenko, Evelina
    [J]. TRENDS IN COGNITIVE SCIENCES, 2024, 28 (06) : 517 - 540