Classifier loss under metric uncertainty

Cited by: 0
Authors
Skalak, David B. [1 ]
Niculescu-Mizil, Alexandru [2 ]
Caruana, Rich [2 ]
Affiliations
[1] Highgate Predict, LLC, Ithaca, NY 14850 USA
[2] Cornell Univ, Ithaca, NY 14853 USA
Source
Funding
U.S. National Science Foundation;
Keywords
performance metric; evaluation; calibration; cross-metric;
DOI
Not available
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Classifiers that are deployed in the field can be used and evaluated in ways that were not anticipated when the model was trained. The final evaluation metric may not have been known at training time, additional performance criteria may have been added, the evaluation metric may have changed over time, or the real-world evaluation procedure may have been impossible to simulate. Unforeseen ways of measuring model utility can degrade performance. Our objective is to provide experimental support for modelers who face potential "cross-metric" performance deterioration. First, to identify model-selection metrics that lead to stronger cross-metric performance, we characterize the expected loss when the selection metric is held fixed and the evaluation metric is varied. Second, we show that the number of data points available to the selection metric has a substantial impact on which selection metric performs best. While addressing these issues, we also consider how calibrating the classifiers to output probabilities influences cross-metric performance. Our experiments show that if models are well calibrated, cross-entropy is the highest-performing selection metric when little data is available for model selection. With these experiments, modelers may be in a better position to choose selection metrics that are robust when it is uncertain which evaluation metric will be applied.
Pages: 310+
Page count: 3
Related Papers
(50 total)
  • [1] Classifier Risk Analysis under Bayesian Uncertainty Models
    Dalton, Lori A.
    [J]. 2013 ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS AND COMPUTERS, 2013, : 1395 - 1399
  • [2] Validation Metric for Dynamic System Responses under Uncertainty
    Xi, Zhimin
    Pan, Hao
    Fu, Yan
    Yang, Ren-Jye
    [J]. SAE INTERNATIONAL JOURNAL OF MATERIALS AND MANUFACTURING, 2015, 8 (02) : 309 - 314
  • [3] Metric Spaces Under Interval Uncertainty: Towards an Adequate Definition
    Afravi, Mahdokht
    Kreinovich, Vladik
    Dumrongpokaphoan, Thongchai
    [J]. ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2016, PT I, 2017, 10061 : 219 - 227
  • [4] Approximate Stream Reasoning with Metric Temporal Logic under Uncertainty
    de Leng, Daniel
    Heintz, Fredrik
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 2760 - 2767
  • [5] Robust Metric Inequalities for Network Loading Under Demand Uncertainty
    Classen, Grit
    Koster, Arie M. C. A.
    Kutschka, Manuel
    Tahiri, Issam
    [J]. ASIA-PACIFIC JOURNAL OF OPERATIONAL RESEARCH, 2015, 32 (05)
  • [6] SIMPLE MINKOWSKI METRIC CLASSIFIER
    TOUSSAINT, GT
    [J]. IEEE TRANSACTIONS ON SYSTEMS SCIENCE AND CYBERNETICS, 1970, SSC-6 (04): 360+
  • [7] Metric tensor for Riemannian Classifier
    Wereszczynski, Kamil
    Michalczuk, Agnieszka
    Staniszewski, Michal
    Josinski, Henryk
    Switonski, Adam
    Wojciechowski, Konrad
    [J]. PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2016 (ICNAAM-2016), 2017, 1863
  • [8] Combining accuracy and prior sensitivity for classifier design under prior uncertainty
    Landgrebe, Thomas
    Duin, Robert P. W.
    [J]. STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, PROCEEDINGS, 2006, 4109 : 512 - 521
  • [9] Tax-loss harvesting under uncertainty
    McKeever, Daniel
    Rydqvist, Kristian
    [J]. JOURNAL OF BANKING & FINANCE, 2022, 140
  • [10] Estimating the uncertainty in the estimated mean area under the ROC curve of a classifier
    Yousef, WA
    Wagner, RF
    Loew, MH
    [J]. PATTERN RECOGNITION LETTERS, 2005, 26 (16) : 2600 - 2610