On the Use of Evaluation Measures for Defect Prediction Studies

Cited by: 14
Authors: Moussa, Rebecca [1]; Sarro, Federica [1]
Affiliations: [1] UCL, London, England
Funding: European Research Council
Keywords: Software Defect Prediction; Evaluation Measures; Static Code Attributes
DOI: 10.1145/3533767.3534405
CLC number: TP31 [Computer Software]
Subject classification codes: 081202; 0835
Abstract
Software defect prediction research has adopted various evaluation measures to assess the performance of prediction models. In this paper, we further stress the importance of choosing appropriate measures in order to correctly assess the strengths and weaknesses of a given defect prediction model, especially since most defect prediction tasks suffer from data imbalance. Investigating 111 previous studies published between 2010 and 2020, we found that over half either use only one evaluation measure, which alone cannot capture all the characteristics of model performance in the presence of imbalanced data, or a set of binary measures which are prone to bias when used to assess models, especially models trained on imbalanced data. We also unveil the magnitude of the impact of assessing popular defect prediction models with several evaluation measures based, for the first time, on both statistical significance tests and effect size analyses. Our results reveal that the evaluation measures produce a different ranking of the classification models in 82% and 85% of the cases studied according to the Wilcoxon statistical significance test and the Vargha-Delaney Â12 effect size, respectively. Further, we observe a very high rank disruption (between 64% and 92% on average) for each of the measures investigated. This signifies that, in the majority of cases, a prediction technique believed to be better than others under a given evaluation measure becomes worse under a different one. We conclude by providing recommendations for the selection of appropriate evaluation measures based on factors specific to the problem at hand, such as the class distribution of the training data and the way in which the model has been built and will be used. Moreover, we recommend including in the set of evaluation measures at least one able to capture the full picture of the confusion matrix, such as the Matthews Correlation Coefficient (MCC).
This will enable researchers to assess whether proposals made in previous work can be applied for purposes different from those originally intended. Besides, we recommend reporting, whenever possible, the raw confusion matrix, so that other researchers can compute any measure of interest, thereby making it feasible to draw meaningful observations across different studies.
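The two quantities the abstract leans on, MCC computed from the four confusion-matrix cells and the Vargha-Delaney Â12 effect size, are both easy to compute once the raw counts or per-run scores are reported. A minimal sketch (the counts and scores used below are hypothetical, not taken from the paper):

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from raw confusion-matrix counts.

    Uses all four cells, so it stays informative on imbalanced data;
    returns 0.0 when any marginal is empty (the usual convention).
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def a12(xs, ys):
    """Vargha-Delaney A12: probability that a value drawn from xs
    exceeds one drawn from ys, counting ties as half (0.5 = no effect)."""
    greater = sum(1 for x in xs for y in ys if x > y)
    ties = sum(1 for x in xs for y in ys if x == y)
    return (greater + 0.5 * ties) / (len(xs) * len(ys))

# Hypothetical result on an imbalanced test set (90 clean, 10 defective):
print(mcc(tp=6, fp=9, fn=4, tn=81))

# Hypothetical per-run MCC scores of two classifiers:
print(a12([0.42, 0.45, 0.40], [0.35, 0.41, 0.38]))
```

Reporting the raw `tp`, `fp`, `fn`, `tn` counts, as the abstract recommends, lets any reader recompute MCC or any other measure of interest.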
Pages: 101-113
Page count: 13
Related Papers (50 total)
  • [21] How the Intended Use of Polygenic Risk Scores Guides the Design and Evaluation of Prediction Studies
    Martens, Forike K.
    Janssens, A. Cecile J. W.
    CURRENT EPIDEMIOLOGY REPORTS, 2019, 6 (02) : 184 - 190
  • [23] Investigating The Use of Deep Neural Networks for Software Defect Prediction
    Samir, Mohamed
    El-Ramly, Mohammad
    Kamel, Amr
    2019 IEEE/ACS 16TH INTERNATIONAL CONFERENCE ON COMPUTER SYSTEMS AND APPLICATIONS (AICCSA 2019), 2019,
  • [24] Researcher Bias: The Use of Machine Learning in Software Defect Prediction
    Shepperd, Martin
    Bowes, David
    Hall, Tracy
    IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, 2014, 40 (06) : 603 - 616
  • [25] An Empirical Study on the Use of Defect Prediction for Test Case Prioritization
    Paterson, David
    Campos, Jose
    Abreu, Rui
    Kapfhammer, Gregory M.
    Fraser, Gordon
    McMinn, Phil
    2019 IEEE 12TH CONFERENCE ON SOFTWARE TESTING, VALIDATION AND VERIFICATION (ICST 2019), 2019, : 346 - 357
  • [26] Empirical Evaluation of Mixed-Project Defect Prediction Models
    Turhan, Burak
    Tosun, Ayse
    Bener, Ayse
    2011 37TH EUROMICRO CONFERENCE ON SOFTWARE ENGINEERING AND ADVANCED APPLICATIONS (SEAA 2011), 2011, : 396 - 403
  • [27] MICROLOCAL DEFECT MEASURES
    GERARD, P
    COMMUNICATIONS IN PARTIAL DIFFERENTIAL EQUATIONS, 1991, 16 (11) : 1761 - 1794
  • [28] Production evaluation of automated reticle defect printability prediction application
    Howard, William B.
    Pomeroy, Scott
    Moses, Raphael
    Thaler, Thomas
    EMLC 2007: 23RD EUROPEAN MASK AND LITHOGRAPHY CONFERENCE, 2007, 6533
  • [29] The Use of Spatial Analysis Techniques in Defect and Nanostructure Studies
    Moram, M. A.
    Gabbai, U. E.
    Sadler, T. C.
    Kappers, M. J.
    Oliver, R. A.
    JOURNAL OF ELECTRONIC MATERIALS, 2010, 39 (06) : 656 - 662