Metrics of Calibration for Probabilistic Predictions

Cited by: 0
Authors
Arrieta-Ibarra, Imanol [1 ]
Gujral, Paman [1 ]
Tannen, Jonathan [1 ]
Tygert, Mark [1 ]
Xu, Cherie [1 ]
Affiliations
[1] Meta, 1 Facebook Way, Menlo Park, CA 94025, USA
Keywords
reliability diagram; calibration plot; cumulative differences; Kolmogorov-Smirnov; Kuiper
DOI
Not available
CLC Number
TP [automation technology, computer technology]
Discipline Code
0812
Abstract
Many predictions are probabilistic in nature; for example, a prediction could be for precipitation tomorrow, but with only a 30% chance. Given such probabilistic predictions together with the actual outcomes, "reliability diagrams" (also known as "calibration plots") help detect and diagnose statistically significant discrepancies (so-called "miscalibration") between the predictions and the outcomes. The canonical reliability diagrams are based on histogramming the observed and expected values of the predictions; replacing the hard histogram binning with soft kernel density estimation using smooth convolutional kernels is another common practice. But which widths of bins or kernels are best? Plots of the cumulative differences between the observed and expected values largely avoid this question by displaying miscalibration directly as the slopes of secant lines for the graphs. Slope is easy to perceive with quantitative precision, even when the constant offsets of the secant lines are irrelevant; there is no need to bin or perform kernel density estimation. The existing standard metrics of miscalibration each summarize a reliability diagram into a single scalar statistic. The cumulative plots naturally lead to scalar metrics for the deviation of the graph of cumulative differences away from zero; good calibration corresponds to a horizontal, flat graph that deviates little from zero. The cumulative approach is currently unconventional, yet offers many favorable statistical properties, guaranteed via mathematical theory backed by rigorous proofs and illustrative numerical examples. In particular, metrics based on binning or kernel density estimation unavoidably must trade off statistical confidence for the ability to resolve variations as a function of the predicted probability, or vice versa: widening the bins or kernels averages away random noise while giving up some resolving power, whereas narrowing the bins or kernels enhances resolving power while not averaging away as much noise. The cumulative methods impose no such explicit trade-off. Considering these results, practitioners probably should adopt the cumulative approach as a standard for best practices.
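The cumulative-differences construction described above can be illustrated with a minimal sketch: sort the observations by predicted probability, accumulate the normalized differences between observed outcomes and predicted probabilities, and summarize the resulting graph's deviation from zero with Kolmogorov-Smirnov-style (maximum absolute deviation) and Kuiper-style (range) statistics. The function name, the binary-outcome assumption, and the normalization by the sample size are illustrative choices, not the paper's exact definitions.

```python
import numpy as np

def cumulative_miscalibration(probs, outcomes):
    """Sketch of a cumulative-differences calibration plot and its scalar summaries.

    probs: predicted probabilities in [0, 1].
    outcomes: observed binary outcomes (0 or 1), assumed here for simplicity.
    Returns the cumulative graph plus KS- and Kuiper-style deviation metrics.
    """
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    # Order observations by predicted probability, so local slopes of the
    # cumulative graph reveal miscalibration at the corresponding predictions.
    order = np.argsort(probs, kind="stable")
    diffs = (outcomes[order] - probs[order]) / len(probs)
    # Cumulative differences, starting the graph at zero.
    cumulative = np.concatenate([[0.0], np.cumsum(diffs)])
    # Kolmogorov-Smirnov-style metric: maximum absolute deviation from zero.
    ks = np.max(np.abs(cumulative))
    # Kuiper-style metric: range of the cumulative graph.
    kuiper = np.max(cumulative) - np.min(cumulative)
    return cumulative, ks, kuiper
```

A perfectly calibrated deterministic example (predictions equal to outcomes) yields a flat graph with both metrics zero, while systematic overconfidence or underconfidence produces a sustained slope and large deviations; no bin or kernel width needs to be chosen.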
Pages: 54
Published in: Journal of Machine Learning Research, 2022, vol. 23