Generating and Reviewing Programming Codes with Large Language Models: A Systematic Mapping Study

Cited by: 0
Authors
Lins de Albuquerque, Beatriz Ventorini [1 ,2 ]
Souza da Cunha, Antonio Fernando [1 ,2 ]
Souza, Leonardo [1 ]
Matsui Siqueira, Sean Wolfgand [1 ]
dos Santos, Rodrigo Pereira [1 ]
Affiliations
[1] Univ Fed Estado Rio de Janeiro UNIRIO, Rio De Janeiro, RJ, Brazil
[2] Petrobras Petr Brasileiro SA, Rio De Janeiro, RJ, Brazil
Keywords
code generation; code completion; code auto-suggestion; automatic refactoring; natural language models; transformer architecture; neural network; LLM; systematic mapping study; FIT
DOI
10.1145/3658271.3658342
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Number
0812
Abstract
Context: The proliferation of technologies based on Large Language Models (LLM) is reshaping various domains, also impacting programming code creation and review. Problem: The decision-making process for adopting LLM in software development demands an understanding of the associated challenges and the diverse application possibilities. Solution: This study addresses the challenges linked to LLM utilization in programming code processes. It explores models, utilization strategies, challenges, and coping mechanisms, focusing on the perspectives of researchers in software development. IS Theory: Drawing on Task-Technology Fit (TTF) theory, the research examines the alignment between task characteristics in code generation and review and LLM technology attributes to discern performance impacts and utilization patterns. Method: Employing the Systematic Mapping of the Literature method, the research analyzes 19 studies selected from 1,257 results retrieved from three digital databases: IEEE Digital Library, Compendex Engineering Village, and Scopus. Summary of Results: The research reveals 23 models, 13 utilization strategies, 15 challenges, and 14 coping mechanisms associated with LLM in programming code processes, offering a comprehensive understanding of the application landscape. Contributions to IS: Contributing to the Information Systems (IS) field, this study provides valuable insights into the utilization of LLM in programming code generation and review. The identified models, strategies, challenges, and coping mechanisms offer practical guidance for decision-making processes related to LLM technology adoption. The research aims to support the IS community in effectively navigating the complexities of integrating large language models into the dynamic software development lifecycle.
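
Illustrative sketch (not drawn from the paper or the mapped studies): one commonly discussed utilization strategy for LLM in code review is to wrap a code snippet in a structured prompt and ask the model to list potential defects. The Python sketch below assumes a hypothetical llm_complete() placeholder standing in for whichever model backend is adopted; its canned return value keeps the sketch runnable without access to any model.

REVIEW_PROMPT = (
    "You are a code reviewer. List potential bugs, style issues, and "
    "security concerns in the following code:\n\n{code}\n"
)


def llm_complete(prompt: str) -> str:
    # Hypothetical placeholder: replace with a call to the chosen LLM backend.
    # The canned answer keeps this sketch runnable without any model access.
    return "1. divide() does not handle b == 0 and will raise ZeroDivisionError."


def review_code(code: str) -> str:
    # Build the review prompt and return the model's feedback as plain text.
    return llm_complete(REVIEW_PROMPT.format(code=code))


if __name__ == "__main__":
    sample = "def divide(a, b):\n    return a / b"
    print(review_code(sample))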
Pages: 10
Related Papers
50 items in total
  • [31] Unveiling the potential of large language models in generating semantic and cross-language clones
    Roy, Palash R.
    Alam, Ajmain I.
    Al-omari, Farouq
    Roy, Banani
    Roy, Chanchal K.
    Schneider, Kevin A.
    2023 IEEE 17TH INTERNATIONAL WORKSHOP ON SOFTWARE CLONES, IWSC 2023, 2023, : 22 - 28
  • [32] BeGrading: large language models for enhanced feedback in programming education
    Yousef, Mina
    Mohamed, Kareem
    Medhat, Walaa
    Mohamed, Ensaf Hussein
    Khoriba, Ghada
    Arafa, Tamer
    Neural Computing and Applications, 2025, 37 (2) : 1027 - 1040
  • [33] Developing an Interactive OpenMP Programming Book with Large Language Models
    Yi, Xinyao
    Wang, Anjia
    Yan, Yonghong
    Liao, Chunhua
    ADVANCING OPENMP FOR FUTURE ACCELERATORS, IWOMP 2024, 2024, 15195 : 176 - 194
  • [34] Significant Productivity Gains through Programming with Large Language Models
    Weber T.
    Brandmaier M.
    Schmidt A.
    Mayer S.
    Proceedings of the ACM on Human-Computer Interaction, 2024, 8 (EICS)
  • [35] Using Large Language Models to Enhance Programming Error Messages
    Leinonen, Juho
    Hellas, Arto
    Sarsa, Sami
    Reeves, Brent
    Denny, Paul
    Prather, James
    Becker, Brett A.
    PROCEEDINGS OF THE 54TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, VOL 1, SIGCSE 2023, 2023, : 563 - 569
  • [36] Programming Computational Electromagnetic Applications Assisted by Large Language Models
    Fernandes, Leandro Carisio
    IEEE ANTENNAS AND PROPAGATION MAGAZINE, 2024, 66 (01) : 63 - 71
  • [37] Large Language Models (GPT) for automating feedback on programming assignments
    Pankiewicz, Maciej
    Baker, Ryan S.
    31ST INTERNATIONAL CONFERENCE ON COMPUTERS IN EDUCATION, ICCE 2023, VOL I, 2023, : 68 - 77
  • [38] Automated Programming Exercise Generation in the Era of Large Language Models
    Meissner, Niklas
    Speth, Sandro
    Becker, Steffen
    2024 36TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING EDUCATION AND TRAINING, CSEE & T 2024, 2024,
  • [39] Leveraging Large Language Models for Generating Personalized Care Recommendations in Dementia
    Hu, Hsiang-Wei
    Lin, Yu-chun
    Chia, Chang-Hung
    Chuang, Ethan
    Yang, Cheng Ru
    2024 IEEE INTERNATIONAL WORKSHOP ON ELECTROMAGNETICS: APPLICATIONS AND STUDENT INNOVATION COMPETITION, IWEM 2024, 2024,
  • [40] Generating Natural Language Adversarial Examples on a Large Scale with Generative Models
    Ren, Yankun
    Lin, Jianbin
    Tang, Siliang
    Zhou, Jun
    Yang, Shuang
    Qi, Yuan
    Ren, Xiang
    ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, 325 : 2156 - 2163