An empirical study of automated unit test generation for Python

Cited by: 0
Authors
Stephan Lukasczyk
Florian Kroiß
Gordon Fraser
Affiliations
[1] University of Passau
Keywords
Dynamic typing; Python; Automated test generation
DOI
Not available
Abstract
Various mature automated test generation tools exist for statically typed programming languages such as Java. Automatically generating unit tests for dynamically typed programming languages such as Python, however, is substantially harder, both because of the dynamic nature of these languages and because of the lack of type information. Our Pynguin framework provides automated unit test generation for Python. In this paper, we extend our previous work on Pynguin to support more aspects of the Python language, and we study a larger variety of well-established state-of-the-art test-generation algorithms, namely DynaMOSA, MIO, and MOSA. Furthermore, we improve Pynguin to generate regression assertions, whose quality we also evaluate. Our experiments confirm that evolutionary algorithms can outperform random test generation in the context of Python as well, and, as in the Java world, DynaMOSA yields the highest coverage. However, our results also show that fundamental issues remain, such as inferring type information for untyped code, which currently limit the effectiveness of test generation for Python.
Related papers
50 results in total
  • [21] JSEFT: Automated JavaScript Unit Test Generation
    Mirshokraie, Shabnam
    Mesbah, Ali
    Pattabiraman, Karthik
    2015 IEEE 8TH INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST), 2015,
  • [22] JAOUT: Automated generation of aspect-oriented unit test
    Xu, GQ
    Yang, ZY
    Huang, HT
    Chen, Q
    Chen, L
    Xu, FB
    11TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE, PROCEEDINGS, 2004, : 374 - 381
  • [23] Can the Generation of Test Cases for Unit Testing be Automated with Rules?
    Nalepa, Grzegorz J.
    Kutt, Krzysztof
    Kaczor, Krzysztof
    ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING, ICAISC 2014, PT II, 2014, 8468 : 536 - 547
  • [24] An empirical evaluation of evolutionary algorithms for unit test suite generation
    Campos, Jose
    Ge, Yan
    Albunian, Nasser
    Fraser, Gordon
    Eler, Marcelo
    Arcuri, Andrea
    INFORMATION AND SOFTWARE TECHNOLOGY, 2018, 104 : 207 - 235
  • [25] Leveraging Large Language Models for Python Unit Test
    Jiri, Medlen
    Emese, Bari
    Medlen, Patrick
    2024 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING, AITEST, 2024, : 95 - 100
  • [26] Empirical Study of Python Call Graph
    Li, Yu
    34TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2019), 2019, : 1274 - 1276
  • [27] An Empirical Study of Flaky Tests in Python
    Gruber, Martin
    Lukasczyk, Stephan
    Kroiß, Florian
    Fraser, Gordon
    2021 14TH IEEE CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST 2021), 2021, : 148 - 158
  • [28] An Empirical Study on Bugs in Python Interpreters
    Wang, Ziyuan
    Bu, Dexin
    Sun, Aiyue
    Gou, Shanyi
    Wang, Yong
    Chen, Lin
    IEEE TRANSACTIONS ON RELIABILITY, 2022, 71 (02) : 716 - 734
  • [29] Optimizing Search-Based Unit Test Generation with Large Language Models: An Empirical Study
    Xiao, Danni
    Guo, Yimeng
    Li, Yanhui
    Chen, Lin
    PROCEEDINGS OF THE 15TH ASIA-PACIFIC SYMPOSIUM ON INTERNETWARE, INTERNETWARE 2024, 2024, : 71 - 80
  • [30] KGEN: A Python Tool for Automated Fortran Kernel Generation and Verification
    Kim, Youngsung
    Dennis, John
    Kerr, Christopher
    Kumar, Raghu Raj Prasanna
    Simha, Amogh
    Baker, Allison
    Mickelson, Sheri
    INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE 2016 (ICCS 2016), 2016, 80 : 1450 - 1460