An empirical study of automated unit test generation for Python

Cited by: 0
Authors
Stephan Lukasczyk
Florian Kroiß
Gordon Fraser
Affiliations
[1] University of Passau
Keywords
Dynamic typing; Python; Automated test generation
DOI
Not available
Abstract
Various mature automated test generation tools exist for statically typed programming languages such as Java. Automatically generating unit tests for dynamically typed programming languages such as Python, however, is substantially more difficult due to the dynamic nature of these languages and the lack of type information. Our Pynguin framework provides automated unit test generation for Python. In this paper, we extend our previous work on Pynguin to support more aspects of the Python language, and we study a larger variety of well-established, state-of-the-art test-generation algorithms, namely DynaMOSA, MIO, and MOSA. Furthermore, we improve Pynguin to generate regression assertions, whose quality we also evaluate. Our experiments confirm that evolutionary algorithms can outperform random test generation in the context of Python as well, and, as in the Java world, DynaMOSA yields the highest coverage. However, our results also show that fundamental issues remain, most notably inferring type information for code that lacks it, which currently limits the effectiveness of test generation for Python.
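To make the abstract's two technical points concrete, the sketch below is a hypothetical illustration (the `median` function and the test are made up for this purpose and are not taken from the paper or from Pynguin's actual output). It shows why a missing type annotation inflates the input search space a generator must explore, and what a regression assertion in the style described above looks like: the assertion simply pins down the output observed on the current version of the code.

```python
# Hypothetical example: function names and values are illustrative and are
# not taken from the paper or from Pynguin's actual output.

def median(values):
    # Without a type annotation, an automated test generator cannot tell
    # whether `values` should be a list of numbers, a string, a dict, ...;
    # the space of candidate inputs is effectively unbounded.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def median_annotated(values: list[float]) -> float:
    # With annotations, a generator can restrict itself to lists of floats,
    # which makes covering both branches above far more likely.
    return median(values)

# A regression assertion captures the value observed when the test was
# generated, so later changes that alter the behaviour make the test fail.
def test_median_regression():
    result = median_annotated([3.0, 1.0, 2.0])
    assert result == 2.0  # output observed on the current implementation
```

This also clarifies the remaining issue the abstract flags: for unannotated code such as `median`, a tool must infer or guess argument types before it can even begin searching for coverage-maximizing inputs.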
Related papers (50 in total)
  • [1] An empirical study of automated unit test generation for Python
    Lukasczyk, Stephan
    Kroiß, Florian
    Fraser, Gordon
    EMPIRICAL SOFTWARE ENGINEERING, 2023, 28 (02)
  • [2] Pynguin: Automated Unit Test Generation for Python
    Lukasczyk, Stephan
    Fraser, Gordon
    2022 ACM/IEEE 44TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING: COMPANION PROCEEDINGS (ICSE-COMPANION 2022), 2022, : 168 - 172
  • [3] Pynguin: Automated Unit Test Generation for Python
    Lukasczyk, Stephan
    Fraser, Gordon
    arXiv, 2022
  • [4] Using GitHub Copilot for Test Generation in Python: An Empirical Study
    El Haji, Khalid
    Brandt, Carolin
    Zaidman, Andy
    PROCEEDINGS OF THE 2024 IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATION OF SOFTWARE TEST, AST 2024, 2024, : 45 - 55
  • [5] Does Automated Unit Test Generation Really Help Software Testers? A Controlled Empirical Study
    Fraser, Gordon
    Staats, Matt
    McMinn, Phil
    Arcuri, Andrea
    Padberg, Frank
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2015, 24 (04)
  • [6] An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation
    Schäfer, Max
    Nadi, Sarah
    Eghbali, Aryaz
    Tip, Frank
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2024, 50 (01) : 85 - 105
  • [7] Using Relative Lines of Code to Guide Automated Test Generation for Python
    Holmes, Josie
    Ahmed, Iftekhar
    Brindescu, Caius
    Gopinath, Rahul
    Zhang, He
    Groce, Alex
    ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2020, 29 (04)
  • [8] Are there any Unit Tests? An Empirical Study on Unit Testing in Open Source Python Projects
    Trautsch, Fabian
    Grabowski, Jens
    2017 10TH IEEE INTERNATIONAL CONFERENCE ON SOFTWARE TESTING, VERIFICATION AND VALIDATION (ICST), 2017, : 207 - 218
  • [9] NxtUnit: Automated Unit Test Generation for Go
    Wang, Siwei
    Mao, Xue
    Cao, Ziguang
    Gao, Yujun
    Shen, Qucheng
    Peng, Chao
    27TH INTERNATIONAL CONFERENCE ON EVALUATION AND ASSESSMENT IN SOFTWARE ENGINEERING, EASE 2023, 2023, : 176 - 179
  • [10] Automated Unit Test Generation for Evolving Software
    Shamshiri, Sina
    2015 10TH JOINT MEETING OF THE EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND THE ACM SIGSOFT SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (ESEC/FSE 2015) PROCEEDINGS, 2015, : 1038 - 1041