Leveraging Large Language Models for Efficient Failure Analysis in Game Development

Cited by: 0
Authors
Marini, Leonardo [1 ]
Gisslen, Linus [2 ]
Sestini, Alessandro [2 ]
Affiliations
[1] Frostbite, Stockholm, Sweden
[2] SEED, Electronic Arts (EA), Redwood City, CA, USA
Keywords
Natural language processing; Validation; Tracing; Games; Software quality; Software development
DOI
10.1109/CoG60054.2024.10645540
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In games, and more generally in software development, early detection of bugs is vital for maintaining the quality of the final product. Automated tests are a powerful tool for catching problems early because they execute periodically; for example, when new code is submitted to the code base, automated tests verify these changes. However, identifying the specific change responsible for a test failure becomes harder when dealing with batches of changes, especially in a large-scale project such as a AAA game, where thousands of people contribute to a single code base. This paper proposes a new approach to automatically identify which change in the code caused a test to fail. The method leverages Large Language Models (LLMs) to associate error messages with the code changes that caused the failure. We investigate the effectiveness of our approach with quantitative and qualitative evaluations. Our approach reaches an accuracy of 71% on our newly created dataset, which comprises issues reported by developers at EA over a period of one year. We further evaluated our model through a user study assessing the utility and usability of the tool from a developer perspective, which showed a significant reduction, up to 60%, in time spent investigating issues.
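A minimal sketch of the kind of pipeline the abstract describes, in which an LLM is asked to associate a failing test's error message with the code change in a batch most likely to have caused the failure. The prompt wording, the CodeChange fields, and the caller-supplied call_llm function are illustrative assumptions, not the authors' implementation.

# failure_analysis_sketch.py
# Assumption: a chat-style LLM is exposed through a caller-supplied
# call_llm(prompt) -> str function; any provider client can be wrapped this way.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CodeChange:
    change_id: str     # e.g. a changelist or commit identifier (assumed field)
    description: str   # submit message written by the developer
    diff_summary: str  # shortened diff or list of touched files


def build_prompt(error_message: str, changes: list[CodeChange]) -> str:
    """Pair the test failure with the batch of candidate changes in one prompt."""
    lines = [
        "An automated test failed with the error below.",
        "Identify which candidate code change most likely caused the failure.",
        "Answer with the change id followed by a one-sentence justification.",
        "",
        "Error message:",
        error_message,
        "",
        "Candidate changes:",
    ]
    for change in changes:
        lines.append(f"- id: {change.change_id}")
        lines.append(f"  description: {change.description}")
        lines.append(f"  touched: {change.diff_summary}")
    return "\n".join(lines)


def identify_culprit(
    error_message: str,
    changes: list[CodeChange],
    call_llm: Callable[[str], str],
) -> str:
    """Return the LLM's verdict on which change in the batch broke the test."""
    return call_llm(build_prompt(error_message, changes))

In practice, the returned text would be parsed back into a change identifier and surfaced to the developer next to the failing test, which is where the reported reduction in investigation time would come from.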
Pages: 8