An Exploratory Study on Using Large Language Models for Mutation Testing

Cited by: 0
Authors
Wang, Bo [1 ]
Chen, Mingda [1 ]
Lin, Youfang [1 ]
Papadakis, Mike [2 ]
Zhang, Jie M. [3 ]
Affiliations
[1] Beijing Jiaotong University, Beijing, China
[2] University of Luxembourg, Luxembourg
[3] King’s College London, London, United Kingdom
Keywords
Exploratory studies - Faults detection - Language model - Large language model - Large-scales - Mutation generation - Mutation rates - Mutation testing - Performance - Software testings
DOI: not available
Related Papers (50 total)
  • [1] Effective test generation using pre-trained Large Language Models and mutation testing
    Dakhel, Arghavan Moradi
    Nikanjam, Amin
    Majdinasab, Vahid
    Khomh, Foutse
    Desmarais, Michel C.
    INFORMATION AND SOFTWARE TECHNOLOGY, 2024, 171
  • [2] μBERT: Mutation Testing using Pre-Trained Language Models
    Degiovanni, Renzo
    Papadakis, Mike
    2022 IEEE 15TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION WORKSHOPS (ICSTW 2022), 2022, : 160 - 169
  • [3] Title and abstract screening for literature reviews using large language models: an exploratory study in the biomedical domain
    Dennstadt, Fabio
    Zink, Johannes
    Putora, Paul Martin
    Hastings, Janna
    Cihoric, Nikola
    SYSTEMATIC REVIEWS, 2024, 13 (01)
  • [4] An Exploratory Evaluation of Large Language Models Using Empirical Software Engineering Tasks
    Liang, Wenjun
    Xiao, Guanping
    PROCEEDINGS OF THE 15TH ASIA-PACIFIC SYMPOSIUM ON INTERNETWARE, INTERNETWARE 2024, 2024, : 31 - 40
  • [5] An Empirical Study on How Large Language Models Impact Software Testing Learning
    Mezzaro, Simone
    Gambi, Alessio
    Fraser, Gordon
    PROCEEDINGS OF 2024 28TH INTERNATIONAL CONFERENCE ON EVALUATION AND ASSESSMENT IN SOFTWARE ENGINEERING, EASE 2024, 2024, : 555 - 564
  • [6] An Exploratory Study on How Non-Determinism in Large Language Models Affects Log Parsing
    Astekin, Merve
    Hort, Max
    Moonen, Leon
    PROCEEDINGS 2024 IEEE/ACM 2ND INTERNATIONAL WORKSHOP ON INTERPRETABILITY, ROBUSTNESS, AND BENCHMARKING IN NEURAL SOFTWARE ENGINEERING, INTENSE 2024, 2024, : 13 - 18
  • [7] Leveraging Cognitive Science for Testing Large Language Models
    Srinivasan, Ramya
    Inakoshi, Hiroya
    Uchino, Kanji
    2023 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING, AITEST, 2023, : 169 - 171
  • [8] Pipelines for Social Bias Testing of Large Language Models
    Nozza, Debora
    Bianchi, Federico
    Hovy, Dirk
    PROCEEDINGS OF WORKSHOP ON CHALLENGES & PERSPECTIVES IN CREATING LARGE LANGUAGE MODELS (BIGSCIENCE EPISODE #5), 2022, : 68 - 74
  • [9] A Survey of Testing Techniques Based on Large Language Models
    Qi, Fei
    Hou, Yingnan
    Lin, Ning
    Bao, Shanshan
    Xu, Nuo
    PROCEEDINGS OF 2024 INTERNATIONAL CONFERENCE ON COMPUTER AND MULTIMEDIA TECHNOLOGY, ICCMT 2024, 2024, : 280 - 284
  • [10] Testing theory of mind in large language models and humans
    Strachan, James W. A.
    Albergo, Dalila
    Borghini, Giulia
    Pansardi, Oriana
    Scaliti, Eugenio
    Gupta, Saurabh
    Saxena, Krati
    Rufo, Alessandro
    Panzeri, Stefano
    Manzi, Guido
    Graziano, Michael S. A.
    Becchio, Cristina
    NATURE HUMAN BEHAVIOUR, 2024, 8 (07): : 1285 - 1295