Uncertainty quantification metrics for deep regression

Cited by: 0
Authors
Lind, Simon Kristoffersson [1 ]
Xiong, Ziliang [2 ]
Forssen, Per-Erik [2 ]
Kruger, Volker [1 ]
Affiliations
[1] Lund Univ LTH, Lund, Sweden
[2] Linkoping Univ, Linkoping, Sweden
Funding
Swedish Research Council
Keywords
Uncertainty; Evaluation; Metrics; Regression;
DOI
10.1016/j.patrec.2024.09.011
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
When deploying deep neural networks on robots or other physical systems, the learned model should reliably quantify predictive uncertainty. A reliable uncertainty estimate allows downstream modules to reason about the safety of the system's actions. In this work, we address metrics for uncertainty quantification. Specifically, we focus on regression tasks and investigate Area Under Sparsification Error (AUSE), Calibration Error (CE), Spearman's Rank Correlation, and Negative Log-Likelihood (NLL). Using multiple datasets, we examine how these metrics behave under four typical types of uncertainty and how stable they are with respect to test-set size, and we reveal their strengths and weaknesses. Our results indicate that Calibration Error is the most stable and interpretable metric, but AUSE and NLL also have their respective use cases. We discourage the use of Spearman's Rank Correlation for evaluating uncertainties and recommend replacing it with AUSE.
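For readers unfamiliar with AUSE, the standard sparsification-error construction can be sketched as follows. This is a minimal NumPy illustration under common conventions (rank test points by predicted uncertainty, remove the most uncertain ones, and compare the resulting error curve against an oracle ranked by true error), not the authors' implementation; the function names and the synthetic data are ours.

```python
import numpy as np

def sparsification_curve(errors, ranking):
    """Mean error of the retained points as the highest-ranked
    (most uncertain) points are removed one at a time."""
    order = np.argsort(-ranking)                    # most uncertain first
    sorted_err = errors[order]
    n = len(errors)
    curve = np.array([sorted_err[k:].mean() for k in range(n)])
    return curve / curve[0]                         # normalise: curve[0] == 1

def ause(errors, uncertainties):
    """Area Under the Sparsification Error curve, approximated here as the
    mean gap between the uncertainty-ranked curve and the oracle curve
    (which ranks points by their true error)."""
    curve = sparsification_curve(errors, uncertainties)
    oracle = sparsification_curve(errors, errors)
    return float(np.mean(curve - oracle))

# Synthetic check: errors drawn with spread proportional to the predicted
# standard deviation, so the uncertainty ranking is informative.
rng = np.random.default_rng(0)
sigma = rng.uniform(0.1, 2.0, size=500)             # predicted std devs
err = np.abs(rng.normal(0.0, sigma))                # correlated true errors
print(f"AUSE (informative ranking): {ause(err, sigma):.3f}")
print(f"AUSE (random ranking):      {ause(err, rng.random(500)):.3f}")
```

A perfect ranking reproduces the oracle curve and yields an AUSE of zero; an uninformative ranking leaves the error curve roughly flat while the oracle curve decreases, giving a larger AUSE.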
Pages: 91-97 (7 pages)
Related papers
50 records in total
  • [21] Misclassification Risk and Uncertainty Quantification in Deep Classifiers
    Sensoy, Murat
    Saleki, Maryam
    Julier, Simon
    Aydogan, Reyhan
    Reid, John
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WACV 2021, 2021, : 2483 - 2491
  • [22] AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification
    Egele, Romain
    Maulik, Romit
    Raghavan, Krishnan
    Lusch, Bethany
    Guyon, Isabelle
    Balaprakash, Prasanna
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 1908 - 1914
  • [23] A hybrid data-driven-physics-constrained Gaussian process regression framework with deep kernel for uncertainty quantification
    Chang, Cheng
    Zeng, Tieyong
    JOURNAL OF COMPUTATIONAL PHYSICS, 2023, 486
  • [24] Uncertainty Quantification Metrics with Varying Statistical Information in Model Calibration and Validation
    Bi, Sifeng
    Prabhu, Saurabh
    Cogan, Scott
    Atamturktur, Sez
    AIAA JOURNAL, 2017, 55 (10) : 3570 - 3583
  • [25] Comparative studies of error metrics in variable fidelity model uncertainty quantification
    Hu, Jiexiang
    Yang, Yang
    Zhou, Qi
    Jiang, Ping
    Shao, Xinyu
    Shu, Leshi
    Zhang, Yahui
    JOURNAL OF ENGINEERING DESIGN, 2018, 29 (8-9) : 512 - 538
  • [27] Uncertain of uncertainties? A comparison of uncertainty quantification metrics for chemical data sets
    Rasmussen, Maria H.
    Duan, Chenru
    Kulik, Heather J.
    Jensen, Jan H.
    JOURNAL OF CHEMINFORMATICS, 2023, 15 (01)
  • [28] Bayesian model updating using stochastic distances as uncertainty quantification metrics
    Bi, S.
    Broggi, M.
    Beer, M.
    Zhang, Y.
    PROCEEDINGS OF INTERNATIONAL CONFERENCE ON NOISE AND VIBRATION ENGINEERING (ISMA2018) / INTERNATIONAL CONFERENCE ON UNCERTAINTY IN STRUCTURAL DYNAMICS (USD2018), 2018, : 5157 - 5167
  • [30] COMPARISON OF UNCERTAINTY QUANTIFICATION METHODS FOR CNN-BASED REGRESSION
    Wursthorn, K.
    Hillemann, M.
    Ulrich, M.
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 43-B2 : 721 - 728