GRADIENTS AS A MEASURE OF UNCERTAINTY IN NEURAL NETWORKS

Cited by: 0
Authors
Lee, Jinsol [1 ]
Al Regib, Ghassan [1 ]
Affiliations
[1] Georgia Inst Technol, OLIVES, Ctr Signal & Informat Proc, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
gradients; uncertainty; unfamiliar input detection; out-of-distribution; image corruption/distortion;
DOI
10.1109/icip40778.2020.9190679
Chinese Library Classification (CLC): TB8 [Photographic technology]
Discipline code: 0804
Abstract
Despite the tremendous success of modern neural networks, they are known to be overconfident even when the model encounters inputs under unfamiliar conditions. Detecting such inputs is vital to preventing models from making naive predictions that may jeopardize real-world applications of neural networks. In this paper, we address the challenging problem of devising a simple yet effective measure of uncertainty in deep neural networks. Specifically, we propose to utilize backpropagated gradients to quantify the uncertainty of trained models. Gradients depict the amount of change a model requires to properly represent a given input, and thus provide valuable insight into how familiar and certain the model is regarding that input. We demonstrate the effectiveness of gradients as a measure of model uncertainty in the detection of unfamiliar inputs, including out-of-distribution and corrupted samples. We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% AUROC in out-of-distribution detection and by 35.7% in corrupted-input detection.
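The abstract's core idea, that the magnitude of the backpropagated gradient reflects how much a trained model would have to change to accommodate an input, can be illustrated with a minimal sketch. This is not the paper's implementation: the classifier, the use of the model's own predicted label as the backprop target, and the choice of the weight-gradient Frobenius norm as the score are all simplifying assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 2-D training clusters (classes 0 and 1).
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W = np.zeros((2, 2))  # weights of a tiny linear softmax classifier
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Train with plain gradient descent on cross-entropy.
for _ in range(200):
    p = softmax(X @ W + b)
    g = p - np.eye(2)[y]                # dL/dlogits for cross-entropy
    W -= 0.1 * (X.T @ g) / len(X)
    b -= 0.1 * g.mean(axis=0)

def gradient_uncertainty(x):
    """Gradient-norm score for one input, backpropagating against the
    model's own predicted label (an assumed design choice, not the
    paper's exact formulation)."""
    p = softmax(x @ W + b)
    target = np.eye(2)[p.argmax()]
    g_logits = p - target               # dL/dlogits
    g_W = np.outer(x, g_logits)         # dL/dW for logits = x @ W + b
    return float(np.linalg.norm(g_W))   # large norm -> unfamiliar input

in_dist = np.array([-2.0, -2.0])  # resembles class-0 training data
ood = np.array([8.0, -9.0])       # far from both training clusters

print(gradient_uncertainty(in_dist), gradient_uncertainty(ood))
```

On this toy setup the out-of-distribution point yields a markedly larger gradient norm than the in-distribution point, matching the intuition that unfamiliar inputs demand a larger model update.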
Pages: 2416-2420
Page count: 5
Related Papers (showing items 31-40 of 50)
  • [31] Autoinverse: Uncertainty Aware Inversion of Neural Networks
    Ansari, Navid
    Seidel, Hans-Peter
    Ferdowsi, Nima Vahidi
    Babaei, Vahid
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [32] Stability analysis of Hopfield neural networks with uncertainty
    Liu, XZ
    Dickson, R
    MATHEMATICAL AND COMPUTER MODELLING, 2001, 34 (3-4) : 353 - 363
  • [33] Uncertainty Propagation through Deep Neural Networks
    Abdelaziz, Ahmed Hussen
    Watanabe, Shinji
    Hershey, John R.
    Vincent, Emanuel
    Kolossa, Dorothea
    16TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2015), VOLS 1-5, 2015, : 3561 - 3565
  • [34] Uncertainty Analysis Applied to Feedforward Neural Networks
    Hess, David E.
    Roddy, Robert F.
    Faller, William E.
    SHIP TECHNOLOGY RESEARCH, 2007, 54 (03) : 114 - +
  • [35] Magnitude and Uncertainty Pruning Criterion for Neural Networks
    Ko, Vinnie
    Oehmcke, Stefan
    Gieseke, Fabian
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 2317 - 2326
  • [36] Reliable neural networks for regression uncertainty estimation
    Tohme, Tony
    Vanslette, Kevin
    Youcef-Toumi, Kamal
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2023, 229
  • [37] Neural networks underlying the metacognitive uncertainty response
    Paul, Erick J.
    Smith, J. David
    Valentin, Vivian V.
    Turner, Benjamin O.
    Barbey, Aron K.
    Ashby, F. Gregory
    CORTEX, 2015, 71 : 306 - 322
  • [38] Modelling uncertainty in biomedical applications of neural networks
    Dorffner, G
    Sykacek, P
    Schittenkopf, C
    ARTIFICIAL NEURAL NETWORKS IN MEDICINE AND BIOLOGY, 2000, : 18 - 25
  • [39] Expressing uncertainty in neural networks for production systems
    Multaheb, Samim Ahmad
    Zimmering, Bernd
    Niggemann, Oliver
    AT-AUTOMATISIERUNGSTECHNIK, 2021, 69 (03) : 221 - 230
  • [40] CNOT-Measure Quantum Neural Networks
    Lukac, Martin
    Abdiyeva, Kamila
    Kameyama, Michitaka
    2018 IEEE 48TH INTERNATIONAL SYMPOSIUM ON MULTIPLE-VALUED LOGIC (ISMVL 2018), 2018, : 186 - 191