GRADIENTS AS A MEASURE OF UNCERTAINTY IN NEURAL NETWORKS

Cited: 0
Authors
Lee, Jinsol [1 ]
Al Regib, Ghassan [1 ]
Affiliations
[1] Georgia Inst Technol, OLIVES, Ctr Signal & Informat Proc, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
gradients; uncertainty; unfamiliar input detection; out-of-distribution; image corruption/distortion;
DOI
10.1109/icip40778.2020.9190679
Chinese Library Classification (CLC)
TB8 [Photographic technology];
Discipline code
0804 ;
Abstract
Despite the tremendous success of modern neural networks, they are known to be overconfident even when the model encounters inputs with unfamiliar conditions. Detecting such inputs is vital to preventing models from making naive predictions that may jeopardize real-world applications of neural networks. In this paper, we address the challenging problem of devising a simple yet effective measure of uncertainty in deep neural networks. Specifically, we propose to utilize backpropagated gradients to quantify the uncertainty of trained models. Gradients depict the required amount of change for a model to properly represent given inputs, thus providing valuable insight into how familiar and certain the model is regarding the inputs. We demonstrate the effectiveness of gradients as a measure of model uncertainty in applications of detecting unfamiliar inputs, including out-of-distribution and corrupted samples. We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% in AUROC score for out-of-distribution detection and 35.7% for corrupted input detection.
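This record gives only the abstract, not the authors' exact formulation. As a rough illustration of the core idea, one common gradient-based uncertainty score backpropagates a loss through the model (here, using the model's own predicted class as a pseudo-label, which is an assumption of this sketch rather than a detail stated in the record) and takes the norm of the resulting parameter gradient. A larger norm means the model would need a larger update to represent the input, suggesting the input is less familiar. A minimal NumPy sketch for a toy linear classifier:

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def gradient_uncertainty(W, b, x):
    """Gradient-norm uncertainty score (illustrative sketch only).

    Backpropagates the cross-entropy loss, using the model's predicted
    class as a pseudo-label, and returns the L2 norm of the parameter
    gradient. The choice of pseudo-label and of the L2 norm are
    assumptions of this sketch, not details taken from the paper.
    """
    logits = W @ x + b
    p = softmax(logits)
    y = np.eye(len(p))[np.argmax(p)]   # pseudo-label: predicted class
    dlogits = p - y                    # grad of cross-entropy w.r.t. logits
    grad_W = np.outer(dlogits, x)      # grad w.r.t. weight matrix
    grad_b = dlogits                   # grad w.r.t. bias
    return float(np.sqrt((grad_W ** 2).sum() + (grad_b ** 2).sum()))

# Toy 2-class classifier on 4-dimensional inputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 4))
b = np.zeros(2)
score = gradient_uncertainty(W, b, rng.normal(size=4))
```

In practice one would compute such scores for a held-out in-distribution set and for candidate inputs, then threshold (or compute AUROC over) the scores to flag out-of-distribution or corrupted samples.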
Pages: 2416 - 2420
Page count: 5
Related papers
50 records total
  • [1] Correlated Parameters to Accurately Measure Uncertainty in Deep Neural Networks
    Posch, Konstantin
    Pilz, Juergen
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (03) : 1037 - 1051
  • [2] Sobolev gradients and neural networks
    Bastian, Michael R.
    Gunther, Jacob H.
    Moon, Todd K.
    2008 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-12, 2008, : 2085 - 2088
  • [3] Efficiently Measure the Topologies of Large-Scale Networks Under the Guidance of Neural Network Gradients
    Fei, Gaolei
    Li, Zeyu
    Zhou, Yunpeng
    Zhai, Xuemeng
    Ye, Jian
    Hu, Guangmin
    IEEE Networking Letters, 2023, 5 (04): : 250 - 254
  • [4] Activated Gradients for Deep Neural Networks
    Liu, Mei
    Chen, Liangming
    Du, Xiaohao
    Jin, Long
    Shang, Mingsheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (04) : 2156 - 2168
  • [5] Weight Uncertainty in Neural Networks
    Blundell, Charles
    Cornebise, Julien
    Kavukcuoglu, Koray
    Wierstra, Daan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 37, 2015, 37 : 1613 - 1622
  • [6] On the uncertainty principle of neural networks
    Zhang, Jun-Jie
    Zhang, Dong-Xiao
    Chen, Jian-Nan
    Pang, Long-Gang
    Meng, Deyu
    ISCIENCE, 2025, 28 (04)
  • [7] Removing uncertainty in neural networks
    Tozzi, Arturo
    Peters, James F.
    COGNITIVE NEURODYNAMICS, 2020, 14 (03) : 339 - 345
  • [8] An uncertainty importance measure of activities in PERT networks
    Cho, JG
    Yum, BJ
    INTERNATIONAL JOURNAL OF PRODUCTION RESEARCH, 1997, 35 (10) : 2737 - 2757
  • [9] Masked Training of Neural Networks with Partial Gradients
    Mohtashami, Amirkeivan
    Jaggi, Martin
    Stich, Sebastian U.
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151 : 5876 - 5890