A large-scale study on research code quality and execution

Cited by: 0
Authors
Ana Trisovic
Matthew K. Lau
Thomas Pasquier
Mercè Crosas
Affiliations
[1] Harvard University,Institute for Quantitative Social Science
[2] Chinese Academy of Sciences,CAS Key Laboratory of Forest Ecology and Management, Institute of Applied Ecology
[3] University of British Columbia,Department of Computer Science
DOI: not available
Abstract
This article presents a study on the quality and execution of research code from publicly available replication datasets at the Harvard Dataverse repository. Research code is typically created by a group of scientists and published together with academic papers to facilitate research transparency and reproducibility. For this study, we define ten questions to address aspects impacting research reproducibility and reuse. First, we retrieve and analyze more than 2000 replication datasets with over 9000 unique R files published from 2010 to 2020. Second, we execute the code in a clean runtime environment to assess its ease of reuse. We identify common coding errors, some of which can be resolved with automatic code cleaning to aid code execution. We find that 74% of R files failed to complete without error in the initial execution, while 56% failed after code cleaning was applied, showing that many errors can be prevented with good coding practices. We also analyze the replication datasets from journals’ collections and discuss the impact of journal policy strictness on the code re-execution rate. Finally, based on our results, we propose a set of recommendations for code dissemination aimed at researchers, journals, and repositories.
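The automatic code cleaning mentioned in the abstract can be pictured as a small preprocessing pass over each R script before re-execution. The sketch below is illustrative only, not the study's actual implementation: it assumes the cleaning consists of neutralizing hard-coded absolute paths in `setwd()` calls (a common cause of re-execution failure on another machine) and collecting `library()`/`require()` dependencies so the runtime can pre-install them. The function name and regexes are hypothetical.

```python
import re


def clean_r_code(source: str) -> tuple[str, list[str]]:
    """Apply simple automatic cleaning to the text of an R script.

    - Rewrites setwd() calls that use absolute paths (Unix or Windows)
      to point at the current working directory instead.
    - Collects package names declared via library() or require(), so a
      harness could install them before running the script.
    """
    # Replace e.g. setwd("/home/alice/project") or setwd("C:/Users/alice")
    # with a relative working directory.
    cleaned = re.sub(
        r'setwd\(\s*["\'](?:[A-Za-z]:)?/[^"\']*["\']\s*\)',
        'setwd(".")',
        source,
    )
    # Collect declared package dependencies, e.g. library(dplyr).
    deps = re.findall(
        r'(?:library|require)\(\s*["\']?([\w.]+)["\']?\s*\)', source
    )
    return cleaned, deps
```

A re-execution harness could apply this pass to each script, install the collected packages into a clean environment, and then run the cleaned file, comparing failure rates before and after cleaning.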
Related papers (50 total)
  • [11] LARGE-SCALE RESEARCH
    KORBMANN, R
    UMSCHAU IN WISSENSCHAFT UND TECHNIK, 1981, 81 (15) : 449 - 449
  • [12] A large-scale empirical study of code smells in JavaScript projects
    David Johannes
    Foutse Khomh
    Giuliano Antoniol
    Software Quality Journal, 2019, 27 : 1271 - 1314
  • [13] A Large-Scale Empirical Study on Code-Comment Inconsistencies
    Wen, Fengcai
    Nagy, Csaba
    Bavota, Gabriele
    Lanza, Michele
    2019 IEEE/ACM 27TH INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2019), 2019, : 53 - 64
  • [14] Research and Development Large-Scale Systems of Code Sequencies for SAW ID Tag
    Zhezherin, A. R.
    Chugunov, A. A.
    2018 WAVE ELECTRONICS AND ITS APPLICATION IN INFORMATION AND TELECOMMUNICATION SYSTEMS (WECONF), 2018,
  • [15] Automated parametric execution and documentation for large-scale simulations
    Kelsey, RL
    Bisset, KR
    Webster, RB
    ENABLING TECHNOLOGY FOR SIMULATION SCIENCE V, 2001, 4367 : 202 - 208
  • [16] A Large Scale Study of Multiple Programming Languages and Code Quality
    Kochhar, Pavneet Singh
    Wijedasa, Dinusha
    Lo, David
    2016 IEEE 23RD INTERNATIONAL CONFERENCE ON SOFTWARE ANALYSIS, EVOLUTION, AND REENGINEERING (SANER), VOL 1, 2016, : 563 - 573
  • [17] A Large Scale Study of Programming Languages and Code Quality in Github
    Ray, Baishakhi
    Posnett, Daryl
    Filkov, Vladimir
    Devanbu, Premkumar
    22ND ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING (FSE 2014), 2014, : 155 - 165
  • [18] Modeling Application Resilience in Large-scale Parallel Execution
    Wu, Kai
    Dong, Wenqian
    Guan, Qiang
    DeBardeleben, Nathan
    Li, Dong
    PROCEEDINGS OF THE 47TH INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, 2018,
  • [19] A Large-scale Study of Wikipedia Users' Quality of Experience
    Salutari, Flavia
    Da Hora, Diego
    Dubuc, Gilles
    Rossi, Dario
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 3194 - 3200
  • [20] A large-scale empirical study of code smells in JavaScript projects
    Johannes, David
    Khomh, Foutse
    Antoniol, Giuliano
    SOFTWARE QUALITY JOURNAL, 2019, 27 (03) : 1271 - 1314