Reporting bias when using real data sets to analyze classification performance

Cited by: 41
Authors
Yousefi, Mohammadmahdi R. [1 ]
Hua, Jianping [2 ]
Sima, Chao [2 ]
Dougherty, Edward R. [1 ,2 ]
Affiliations
[1] Texas A&M Univ, Dept Elect & Comp Engn, College Stn, TX 77843 USA
[2] Translat Genom Res Inst, Computat Biol Div, Phoenix, AZ 85004 USA
Funding
US National Science Foundation;
Keywords
FEATURE-SELECTION; BREAST-CANCER; MOLECULAR CLASSIFICATION; EXPRESSION; VALIDATION; CARCINOMAS; SIGNATURES; SURVIVAL; LEUKEMIA;
DOI
10.1093/bioinformatics/btp605
CLC classification number
Q5 [Biochemistry];
Subject classification codes
071010 ; 081704 ;
Abstract
Motivation: It is commonplace for authors to propose a new classification rule, either the operator-construction part or the feature selection, and to demonstrate its performance on real data sets, which often come from high-dimensional studies with small samples, such as gene-expression microarrays. Owing to the variability in feature selection and error estimation, individual reported performances are highly imprecise. Hence, if only the best test results are reported, these will be biased relative to the overall performance of the proposed procedure.

Results: This article characterizes reporting bias with several statistics and computes these statistics in a large simulation study using both modeled and real data. The results appear as curves giving the different reporting biases as functions of the number of samples tested when reporting only the best or second-best performance. It does this for two classification rules, linear discriminant analysis (LDA) and 3-nearest-neighbor (3NN), and for filter and wrapper feature selection, the t-test and sequential forward search. These were chosen on account of their well-studied properties and because they were amenable to the extremely large amount of processing required for the simulations. The results across all the experiments are consistent: when reporting the best or second-best performing data set, there is generally a large bias that overrides what would be considered a significant performance differential. We conclude that there needs to be a database of data sets and that, for studies depending on real data, results should be reported for all data sets in the database.
Pages: 68-76
Page count: 9
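
A minimal sketch (not the authors' code) of the reporting-bias effect described in the abstract: if a classifier's true error is the same on every data set but each data set yields only a noisy error estimate, then reporting the best (lowest) estimate across several data sets is optimistically biased. All parameter values below (true_error, est_sd, n_datasets) are illustrative assumptions, and Gaussian noise merely stands in for the variability due to small samples, feature selection and error estimation.

import numpy as np

rng = np.random.default_rng(0)

true_error = 0.25      # assumed true classification error on every data set
est_sd = 0.05          # assumed std. dev. of the error estimator
n_datasets = 10        # number of data sets tested in one hypothetical study
n_trials = 100_000     # Monte Carlo repetitions

# Each row holds the error estimates obtained on the n_datasets data sets in one study.
estimates = true_error + est_sd * rng.standard_normal((n_trials, n_datasets))

sorted_est = np.sort(estimates, axis=1)
best = sorted_est[:, 0]          # best (lowest) reported error in each study
second_best = sorted_est[:, 1]   # second-best reported error in each study

print(f"mean of all estimates       : {estimates.mean():.4f}")    # close to true_error
print(f"mean of best reported       : {best.mean():.4f}")         # optimistically low
print(f"reporting bias (best)       : {true_error - best.mean():.4f}")
print(f"reporting bias (second best): {true_error - second_best.mean():.4f}")

Increasing n_datasets in this sketch makes the bias of the best reported result grow, which mirrors the paper's finding that the bias increases with the number of data sets from which the best or second-best performance is selected.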