GRADIENTS AS A MEASURE OF UNCERTAINTY IN NEURAL NETWORKS

Cited by: 0
Authors
Lee, Jinsol [1 ]
Al Regib, Ghassan [1 ]
Affiliations
[1] Georgia Inst Technol, OLIVES, Ctr Signal & Informat Proc, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
gradients; uncertainty; unfamiliar input detection; out-of-distribution; image corruption/distortion;
DOI
10.1109/icip40778.2020.9190679
Chinese Library Classification
TB8 [Photographic Technology];
Discipline Classification Code
0804;
Abstract
Despite the tremendous success of modern neural networks, they are known to be overconfident even when they encounter inputs with unfamiliar conditions. Detecting such inputs is vital to preventing models from making naive predictions that may jeopardize real-world applications of neural networks. In this paper, we address the challenging problem of devising a simple yet effective measure of uncertainty in deep neural networks. Specifically, we propose to utilize backpropagated gradients to quantify the uncertainty of trained models. Gradients depict the amount of change a model requires to properly represent a given input, thus providing valuable insight into how familiar and certain the model is regarding that input. We demonstrate the effectiveness of gradients as a measure of model uncertainty in applications of detecting unfamiliar inputs, including out-of-distribution and corrupted samples. We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% AUROC in out-of-distribution detection and 35.7% in corrupted input detection.
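The core idea in the abstract, that the gradient magnitude needed to "adjust" a model for an input can serve as an uncertainty score, can be illustrated with a toy example. The sketch below is not the authors' implementation: it uses plain NumPy, a linear softmax classifier, and self-labeling with the model's own predicted class (one common choice for obtaining a backpropagated gradient without ground-truth labels); the scoring function name and setup are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_uncertainty(W, x):
    """Illustrative gradient-based uncertainty score.

    For a linear softmax classifier p = softmax(W @ x), backpropagate
    the cross-entropy loss taken against the model's own predicted
    class, and return the Frobenius norm of the weight gradient.
    Intuitively, this norm reflects how much the weights would have to
    change to represent the input; in the paper's framing, larger
    required change suggests a less familiar input.
    """
    p = softmax(W @ x)
    y = np.zeros_like(p)
    y[np.argmax(p)] = 1.0           # self-label with the predicted class
    grad = np.outer(p - y, x)       # dL/dW for softmax + cross-entropy
    return np.linalg.norm(grad)

# Toy usage: score two inputs under a randomly initialized classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
x_a = rng.normal(size=4)
x_b = rng.normal(size=4)
print(gradient_uncertainty(W, x_a), gradient_uncertainty(W, x_b))
```

In the paper the gradients are taken through a trained deep network rather than a single linear layer, but the mechanism is the same: one backward pass per input, followed by a norm over the resulting gradients.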
Pages: 2416-2420
Page count: 5
Related Papers
50 records in total
  • [41] Significance measure of local cluster neural networks
    Eickhoff, Ralf
    Sitte, Joaquin
    2007 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-6, 2007, : 172 - +
  • [42] An empirical measure of element contribution in neural networks
    Mak, B
    Blanning, RW
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART C-APPLICATIONS AND REVIEWS, 1998, 28 (04): : 561 - 564
  • [43] MEASURE VALUED DIFFERENTIATION FOR STOCHASTIC NEURAL NETWORKS
    Flynn, Thomas
    2017 WINTER SIMULATION CONFERENCE (WSC), 2017, : 4622 - 4623
  • [44] A VOLATILITY MEASURE FOR ANNEALING IN FEEDBACK NEURAL NETWORKS
    ALSPECTOR, J
    ZEPPENFELD, T
    LUNA, S
    NEURAL COMPUTATION, 1992, 4 (02) : 191 - 195
  • [45] A Metric to Measure Contribution of Nodes in Neural Networks
    Jung, Jay Hoon
    Shin, Yousun
    Kwon, Youngmin
    2019 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2019), 2019, : 1508 - 1515
  • [46] Hamiltonian Deep Neural Networks Guaranteeing Nonvanishing Gradients by Design
    Galimberti, Clara Lucia
    Furieri, Luca
    Xu, Liang
    Ferrari-Trecate, Giancarlo
    IEEE TRANSACTIONS ON AUTOMATIC CONTROL, 2023, 68 (05) : 3155 - 3162
  • [47] Training Neural Networks Without Gradients: A Scalable ADMM Approach
    Taylor, Gavin
    Burmeister, Ryan
    Xu, Zheng
    Singh, Bharat
    Patel, Ankit
    Goldstein, Tom
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 48, 2016, 48
  • [48] Exploring Node Classification Uncertainty in Graph Neural Networks
    Islam, Md. Farhadul
    Zabeen, Sarah
    Bin Rahman, Fardin
    Islam, Md. Azharul
    Bin Kibria, Fahmid
    Manab, Meem Arafat
    Karim, Dewan Ziaul
    Rasel, Annajiat Alim
    PROCEEDINGS OF THE 2023 ACM SOUTHEAST CONFERENCE, ACMSE 2023, 2023, : 186 - 190
  • [49] Impulsive stabilization of delayed neural networks, with and without uncertainty
    Li, Chuandong
    Liao, Xiaofeng
    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, 2007, 17 (16) : 1489 - 1502
  • [50] Discovering uncertainty: Bayesian constitutive artificial neural networks
    Linka, Kevin
    Holzapfel, Gerhard A.
    Kuhl, Ellen
    COMPUTER METHODS IN APPLIED MECHANICS AND ENGINEERING, 2025, 433