On the Use of Evaluation Measures for Defect Prediction Studies

Cited by: 14
Authors
Moussa, Rebecca [1 ]
Sarro, Federica [1 ]
Affiliations
[1] UCL, London, England
Funding
European Research Council;
Keywords
Software Defect Prediction; Evaluation Measures; Static Code Attributes;
DOI
10.1145/3533767.3534405
CLC Number
TP31 [Computer Software];
Subject Classification
081202; 0835;
Abstract
Software defect prediction research has adopted various evaluation measures to assess the performance of prediction models. In this paper, we further stress the importance of choosing appropriate measures in order to correctly assess the strengths and weaknesses of a given defect prediction model, especially given that most defect prediction tasks suffer from data imbalance. Investigating 111 previous studies published between 2010 and 2020, we found that over half either use only one evaluation measure, which alone cannot express all the characteristics of model performance in the presence of imbalanced data, or a set of binary measures which are prone to bias when used to assess models, especially models trained on imbalanced data. We also unveil the magnitude of the impact of assessing popular defect prediction models with several evaluation measures, based, for the first time, on both statistical significance tests and effect size analyses. Our results reveal that the evaluation measures produce a different ranking of the classification models in 82% and 85% of the cases studied, according to the Wilcoxon statistical significance test and the Â12 effect size, respectively. Further, we observe a very high rank disruption (between 64% and 92% on average) for each of the measures investigated. This signifies that, in the majority of cases, a prediction technique believed to be better than others under one evaluation measure becomes worse under a different one. We conclude by providing some recommendations for selecting appropriate evaluation measures based on factors specific to the problem at hand, such as the class distribution of the training data and the way in which the model has been built and will be used. Moreover, we recommend including in the set of evaluation measures at least one able to capture the full picture of the confusion matrix, such as MCC.
This will enable researchers to assess whether proposals made in previous work can be applied for purposes different from the ones they were originally intended for. In addition, we recommend reporting, whenever possible, the raw confusion matrix, to allow other researchers to compute any measure of interest, thereby making it feasible to draw meaningful observations across different studies.
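Both measures named in the abstract can be computed directly from raw data, which is one reason the authors recommend reporting the confusion matrix. The sketch below shows MCC from raw confusion-matrix counts and the Vargha-Delaney Â12 effect size from two score samples; function names and example counts are illustrative, not taken from the paper or its artifacts.

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient from raw confusion-matrix counts.

    Uses all four cells, so it remains informative on imbalanced data;
    returns 0.0 for the degenerate case where any marginal is zero.
    """
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def a12(x, y):
    """Vargha-Delaney Â12: probability that a value drawn from sample x
    is larger than one drawn from sample y, with ties counted as 0.5."""
    gt = sum(1 for xi in x for yi in y if xi > yi)
    eq = sum(1 for xi in x for yi in y if xi == yi)
    return (gt + 0.5 * eq) / (len(x) * len(y))
```

For example, a perfect classifier gives `mcc(50, 0, 0, 50) == 1.0`, and two identical score samples give `a12(s, s) == 0.5` (no effect), so Â12 values far from 0.5 indicate that one technique stochastically dominates the other.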
Pages: 101-113
Page count: 13
Related Papers
50 records total
  • [31] AN EVALUATION OF THE USE OF DIFFERENCE SCORES IN PREDICTION
    Wittenborn, J. R.
    JOURNAL OF CLINICAL PSYCHOLOGY, 1951, 7 (02) : 108 - 111
  • [32] USE OF CONSENSUS - PREDICTION, ATTRIBUTION AND EVALUATION
    LOWE, CA
    KASSIN, SM
    PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN, 1977, 3 (04) : 616 - 619
  • [33] Personality and performance-based measures in the prediction of alcohol use
    Skeel, Reid L.
    Pilarski, Carrie
    Pytlak, Kimberley
    Neudecker, John
    PSYCHOLOGY OF ADDICTIVE BEHAVIORS, 2008, 22 (03) : 402 - 409
  • [34] Semi-classical measures and defect measures
    Burq, N
    ASTERISQUE, 1997, (245) : 167 - 195
  • [35] Evaluation and Performance Prediction for Asphalt Pavement Preventive Conservation Measures
    Yang Yan Hai
    Zhang Huai Zhi
    Wang Bo
    ARCHITECTURE AND BUILDING MATERIALS, PTS 1 AND 2, 2011, 99-100 : 308 - +
  • [36] Performance Measures for Prediction Models and Markers: Evaluation of Predictions and Classifications
    Steyerberg, Ewout W.
    Van Calster, Ben
    Pencina, Michael J.
    REVISTA ESPANOLA DE CARDIOLOGIA, 2011, 64 (09): : 788 - 794
  • [37] Evaluation of Group Fairness Measures in Student Performance Prediction Problems
    Tai Le Quy
    Thi Huyen Nguyen
    Friege, Gunnar
    Ntoutsi, Eirini
    MACHINE LEARNING AND PRINCIPLES AND PRACTICE OF KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT I, 2023, 1752 : 119 - 136
  • [38] The use of synchronization measures in studies of plant reproductive phenology
    Bolmgren, K
    OIKOS, 1998, 82 (02) : 411 - 415
  • [39] REFERENCE MEASURES FOR USE IN STUDIES OF PLANT WATER NEEDS
    KATERJI, N
    HALLAIRE, M
    AGRONOMIE, 1984, 4 (10): : 999 - 1008
  • [40] THE USE OF REPEATED MEASURES ANALYSES IN DEVELOPMENTAL TOXICOLOGY STUDIES
    TAMURA, RN
    BUELKESAM, J
    NEUROTOXICOLOGY AND TERATOLOGY, 1992, 14 (03) : 205 - 210