Interpretability for Neural Networks from the Perspective of Probability Density

Cited: 0
Authors
Lu, Lu [1 ]
Pan, Tingting [1 ]
Zhao, Junhong [1 ]
Yang, Jie [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Math Sci, Dalian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
neural networks; interpretability; probability density; Gaussian distribution;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Currently, most work on the interpretability of neural networks focuses on visually explaining the features learned by hidden layers. This paper explores the relationship between the input units and the output units of a neural network from the perspective of probability density. For classification problems, it shows that, under the assumption that the input units are independent of each other and follow a Gaussian distribution, the probability density function (PDF) of an output unit can be expressed as a mixture of three Gaussian density functions whose means and variances are related to the information of the input units. The experimental results show that the theoretical distribution of the output unit is largely consistent with the empirically observed distribution.
Pages: 1502-1507 (6 pages)