Classifier loss under metric uncertainty

Cited by: 0
Authors
Skalak, David B. [1 ]
Niculescu-Mizil, Alexandru [2 ]
Caruana, Rich [2 ]
Affiliations
[1] LLC, Highgate Predict, Ithaca, NY 14850 USA
[2] Cornell Univ, Ithaca, NY 14853 USA
Source
Funding
National Science Foundation (USA);
Keywords
performance metric; evaluation; calibration; cross-metric;
DOI
not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Classifiers that are deployed in the field can be used and evaluated in ways that were not anticipated when the model was trained. The final evaluation metric may not have been known at training time, additional performance criteria may have been added, the evaluation metric may have changed over time, or the real-world evaluation procedure may have been impossible to simulate. Unforeseen ways of measuring model utility can degrade performance. Our objective is to provide experimental support for modelers who face potential "cross-metric" performance deterioration. First, to identify model-selection metrics that lead to stronger cross-metric performance, we characterize the expected loss when the selection metric is held fixed and the evaluation metric is varied. Second, we show that the number of data points scored by the selection metric has a substantial impact on which selection metric is optimal. Throughout, we consider how calibrating the classifiers to output probabilities influences cross-metric performance. Our experiments show that when models are well calibrated, cross-entropy is the highest-performing selection metric when little data is available for model selection. With these experiments, modelers may be in a better position to choose selection metrics that are robust when it is uncertain which evaluation metric will be applied.
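The selection-versus-evaluation split described in the abstract can be sketched in a few lines: pick a model by one metric (cross-entropy here) on a small selection set, then score the winner under a different metric (accuracy here). The two candidate models, their predicted probabilities, and the labels below are illustrative assumptions, not the paper's experimental setup:

```python
import math

def log_loss(y_true, p):
    # Cross-entropy of predicted probabilities against binary labels,
    # clipped to avoid log(0).
    eps = 1e-12
    return -sum(y * math.log(max(p_i, eps)) + (1 - y) * math.log(max(1 - p_i, eps))
                for y, p_i in zip(y_true, p)) / len(y_true)

def accuracy(y_true, p, thresh=0.5):
    # Fraction of examples where thresholding the probability matches the label.
    return sum((p_i >= thresh) == bool(y) for y, p_i in zip(y_true, p)) / len(y_true)

# Hypothetical predictions from two already-trained models on a small selection set.
y_sel = [1, 0, 1, 1, 0]
preds = {
    "model_a": [0.9, 0.2, 0.8, 0.7, 0.1],    # confident, well separated
    "model_b": [0.6, 0.4, 0.55, 0.6, 0.45],  # same ranking, weakly separated
}

# Select by the selection metric (cross-entropy)...
best = min(preds, key=lambda m: log_loss(y_sel, preds[m]))

# ...then evaluate the chosen model under a different metric (accuracy).
print(best, round(accuracy(y_sel, preds[best]), 2))  # → model_a 1.0
```

In this toy case both models rank the examples identically, so a threshold metric like accuracy cannot distinguish them; cross-entropy still prefers the better-calibrated model, which is the kind of behavior the paper's cross-metric experiments probe at scale.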
Pages: 310 / +
Page count: 3
Related papers
50 records in total
  • [21] Model Validation Metric and Model Bias Characterization for Dynamic System Responses under Uncertainty
    Xi, Zhimin
    Fu, Yan
    Yang, Ren-Jye
    [J]. PROCEEDINGS OF THE ASME INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE 2012, VOL 3, PTS A AND B, 2012, : 1249 - +
  • [22] Multi-Metric Validation Under Uncertainty for Multivariate Model Outputs and Limited Measurements
    White, Andrew
    Mahadevan, Sankaran
    Schmucker, Jason
    Karl, Alexander
    [J]. JOURNAL OF VERIFICATION, VALIDATION AND UNCERTAINTY QUANTIFICATION, 2022, 7 (04):
  • [23] Kernel Classifier with Correntropy Loss
    Pokharel, Rosha
    Principe, Jose C.
    [J]. 2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [24] Optimal saving rules for loss-averse agents under uncertainty
    Siegmann, A
    [J]. ECONOMICS LETTERS, 2002, 77 (01) : 27 - 34
  • [25] REPRESENTATION OF FARMERS BEHAVIOR UNDER UNCERTAINTY WITH A FOCUS-LOSS CONSTRAINT
    BOUSSARD, JM
    PETIT, M
    [J]. JOURNAL OF FARM ECONOMICS, 1967, 49 (04): : 869 - &
  • [26] Power Loss-Aware Transactive Microgrid Coalitions under Uncertainty
    Sadeghi, Mohammad
    Mollahasani, Shahram
    Erol-Kantarci, Melike
    [J]. ENERGIES, 2020, 13 (21)
  • [27] Metric learning-guided k nearest neighbor multilabel classifier
    Ma, Jiajun
    Zhou, Shuisheng
    [J]. NEURAL COMPUTING & APPLICATIONS, 2021, 33 (07): : 2411 - 2425
  • [28] Performance Metric Elicitation from Pairwise Classifier Comparisons
    Hiranandani, Gaurush
    Boodaghians, Shant
    Mehta, Ruta
    Koyejo, Oluwasanmi
    [J]. 22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89 : 371 - 379
  • [29] SIMILARITY METRIC LEARNING FOR A VARIABLE-KERNEL CLASSIFIER
    LOWE, DG
    [J]. NEURAL COMPUTATION, 1995, 7 (01) : 72 - 85
  • [30] QUANTUM FISHER METRIC AND UNCERTAINTY RELATIONS
    CAIANIELLO, ER
    GUZ, W
    [J]. PHYSICS LETTERS A, 1988, 126 (04) : 223 - 225