GRADIENTS AS A MEASURE OF UNCERTAINTY IN NEURAL NETWORKS

Cited: 0
Authors
Lee, Jinsol [1 ]
Al Regib, Ghassan [1 ]
Affiliations
[1] Georgia Inst Technol, OLIVES, Ctr Signal & Informat Proc, Sch Elect & Comp Engn, Atlanta, GA 30332 USA
Keywords
gradients; uncertainty; unfamiliar input detection; out-of-distribution; image corruption/distortion;
DOI
10.1109/icip40778.2020.9190679
Chinese Library Classification
TB8 [Photographic technology];
Discipline code
0804
Abstract
Despite the tremendous success of modern neural networks, they are known to be overconfident even when they encounter inputs with unfamiliar conditions. Detecting such inputs is vital to preventing models from making naive predictions that may jeopardize real-world applications of neural networks. In this paper, we address the challenging problem of devising a simple yet effective measure of uncertainty in deep neural networks. Specifically, we propose to utilize backpropagated gradients to quantify the uncertainty of trained models. Gradients depict the required amount of change for a model to properly represent given inputs, thus providing valuable insight into how familiar and certain the model is regarding those inputs. We demonstrate the effectiveness of gradients as a measure of model uncertainty in applications of detecting unfamiliar inputs, including out-of-distribution and corrupted samples. We show that our gradient-based method outperforms state-of-the-art methods by up to 4.8% AUROC in out-of-distribution detection and 35.7% in corrupted-input detection.
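The core idea in the abstract — scoring an input by the magnitude of the gradient that backpropagation produces for it — can be sketched minimally as follows. This is an illustration, not the paper's exact protocol: it assumes a linear softmax classifier and backpropagates cross-entropy against the model's own argmax prediction, using the Frobenius norm of the weight gradient as the uncertainty score. The function name and the choice of pseudo-label are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_uncertainty(W, x):
    """Score an input by the norm of its backpropagated gradient.

    Minimal sketch (assumption, not the paper's exact setup): we
    backpropagate cross-entropy against the model's own argmax
    prediction and return the Frobenius norm of dL/dW. The intuition
    from the abstract: a large gradient means the model would need a
    large change to represent this input, i.e. the input is unfamiliar.
    """
    logits = W @ x
    p = softmax(logits)
    y = np.zeros_like(p)
    y[p.argmax()] = 1.0          # pseudo-label = model's own prediction
    # For softmax + cross-entropy, dL/dlogits = p - y
    grad_W = np.outer(p - y, x)  # chain rule: dL/dW
    return np.linalg.norm(grad_W)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))      # toy trained weights
score = gradient_uncertainty(W, rng.normal(size=4))
print(score)
```

In practice the same scoring would be applied to a trained deep network, where one gradient norm (or a set of per-layer norms) is computed per test input and thresholded to flag out-of-distribution or corrupted samples.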
Pages: 2416-2420 (5 pages)