Leveraging Large Language Models for Efficient Failure Analysis in Game Development

Cited by: 0
Authors
Marini, Leonardo [1 ]
Gisslen, Linus [2 ]
Sestini, Alessandro [2 ]
Affiliations
[1] Frostbite, Stockholm, Sweden
[2] SEED, Electronic Arts (EA), Redwood City, CA, USA
Keywords
Natural language processing; Validation; Tracing; Games; Software quality; Software development
DOI
10.1109/CoG60054.2024.10645540
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In games, and more generally in software development, early detection of bugs is vital to maintaining a high-quality final product. Automated tests are a powerful tool for catching problems early because they run periodically; for example, when new code is submitted to the code base, an automated test verifies those changes. However, identifying the specific change responsible for a test failure becomes harder when dealing with batches of changes, especially in a large-scale project such as a AAA game, where thousands of people contribute to a single code base. This paper proposes a new approach to automatically identify which code change caused a test to fail. The method leverages Large Language Models (LLMs) to associate error messages with the code changes that caused the failure. We investigate the effectiveness of our approach with quantitative and qualitative evaluations. Our approach reaches an accuracy of 71% on our newly created dataset, which comprises issues reported by developers at EA over a period of one year. We further evaluated our model through a user study assessing the utility and usability of the tool from a developer perspective, which showed a significant reduction, up to 60%, in the time spent investigating issues.
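The core idea described in the abstract, pairing a failing test's error message with the batch of candidate code changes and asking an LLM to name the likely culprit, can be illustrated with a minimal sketch. This is not the authors' implementation: the prompt wording, the CodeChange fields, and the query_llm placeholder below are assumptions made for illustration only and would need to be wired to an actual LLM provider and change-tracking system.

```python
# Illustrative sketch only: prompt format, data fields, and query_llm() are assumptions,
# not the method published in the paper.
from dataclasses import dataclass


@dataclass
class CodeChange:
    change_id: str      # e.g. a changelist or commit identifier
    author: str
    description: str    # submit message written by the developer
    diff: str           # textual diff of the submitted change


def build_prompt(error_message: str, changes: list[CodeChange]) -> str:
    """Combine the test failure and the candidate changes into a single prompt."""
    parts = [
        "A periodic automated test failed with the error message below.",
        "Identify which of the candidate code changes most likely caused the failure.",
        "Answer with the change id followed by a one-sentence rationale.",
        "",
        "ERROR MESSAGE:",
        error_message,
        "",
        "CANDIDATE CHANGES:",
    ]
    for change in changes:
        parts += [
            f"--- change {change.change_id} by {change.author} ---",
            change.description,
            change.diff,
            "",
        ]
    return "\n".join(parts)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to whichever LLM backend is available."""
    raise NotImplementedError("wire this to your LLM provider of choice")


def rank_suspect_change(error_message: str, changes: list[CodeChange]) -> str:
    """Ask the model which change in the batch is the likely culprit."""
    return query_llm(build_prompt(error_message, changes))
```

In this sketch the model's free-text answer is returned directly; a production version would presumably constrain the output (for example to a single change id) so it can be attached to the failing test report automatically.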
Pages: 8