Propagating Large Language Models Programming Feedback

Cited by: 0
Authors
Koutcheme, Charles [1 ]
Hellas, Arto [1 ]
Affiliations
[1] Aalto University, Espoo, Finland
Keywords
large language models; programming feedback; computer science education
DOI
10.1145/3657604.3664665
Chinese Library Classification (CLC)
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Large language models (LLMs) such as GPT-4 have emerged as promising tools for providing programming feedback. However, effective deployment of LLMs in massive classes and Massive Open Online Courses (MOOCs) raises financial concerns, calling for methods to minimize the number of calls to the APIs and systems serving such powerful models. In this article, we revisit the problem of 'propagating feedback' within the contemporary landscape of LLMs. Specifically, we explore feedback propagation as a way to reduce the cost of leveraging LLMs for providing programming feedback at scale. Our study investigates the effectiveness of this approach in the context of students requiring next-step hints for Python programming problems, presenting initial results that support the viability of the approach. We discuss our findings' implications and suggest directions for future research in optimizing feedback mechanisms for large-scale educational environments.
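The abstract does not spell out the propagation mechanism, so the sketch below is only one plausible reading: it assumes "propagating feedback" means reusing a hint generated for one student submission on later, sufficiently similar submissions, so the LLM is called only on cache misses. The similarity measure (a difflib ratio over lightly normalized source code), the 0.9 threshold, and the generate_hint callable are illustrative assumptions, not the method described in the paper.

import difflib
from typing import Callable, Optional


class FeedbackPropagator:
    """Reuse previously generated hints for similar submissions (illustrative sketch)."""

    def __init__(self, generate_hint: Callable[[str], str], threshold: float = 0.9):
        self.generate_hint = generate_hint  # stand-in for the costly LLM API call
        self.threshold = threshold          # assumed similarity cut-off, not from the paper
        self.cache = []                     # list of (submission, hint) pairs

    @staticmethod
    def _normalize(code: str) -> str:
        # Crude normalization: drop blank lines and surrounding whitespace.
        return "\n".join(line.strip() for line in code.splitlines() if line.strip())

    def _lookup(self, code: str) -> Optional[str]:
        # Return a cached hint whose originating submission is close enough.
        norm = self._normalize(code)
        for cached_code, hint in self.cache:
            ratio = difflib.SequenceMatcher(None, norm, self._normalize(cached_code)).ratio()
            if ratio >= self.threshold:
                return hint
        return None

    def get_hint(self, submission: str) -> str:
        # Propagate an existing hint when possible; otherwise pay for one LLM call.
        hint = self._lookup(submission)
        if hint is None:
            hint = self.generate_hint(submission)
            self.cache.append((submission, hint))
        return hint

In a course or MOOC setting, get_hint would be invoked once per help request; the fraction of requests served from the cache rather than the LLM is what would drive the cost savings the abstract alludes to.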
Pages: 366-370
Page count: 5