A graph-based interpretability method for deep neural networks

Cited by: 5
Authors
Wang, Tao [1 ,3 ]
Zheng, Xiangwei [1 ,3 ]
Zhang, Lifeng [1 ,3 ]
Cui, Zhen [2 ,3 ]
Xu, Chunyan [2 ,3 ]
Affiliations
[1] Shandong Normal Univ, Sch Informat Sci & Engn, Jinan 250300, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[3] State Key Lab High End Server & Storage Technol, Jinan 250300, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph; Graph generation; Association analysis; Deep neural networks; Interpretability;
DOI
10.1016/j.neucom.2023.126651
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the development of artificial intelligence, deep learning, as its most representative branch, has been applied to many fields and is profoundly influencing human society. However, deep neural networks (DNNs) remain black-box models, and the process by which they make decisions internally is still difficult to understand and control. At the same time, DNNs consume substantial hardware resources, resulting in high energy consumption. It is therefore important to study the characteristics of deep AI models and to understand the interactions among their internal parameters, so as to improve the interpretability of DNNs, optimize their structure, and increase their computational efficiency. In this paper, we propose a graph-based interpretability method for deep neural networks (GIMDNN). The running parameters of a DNN are modeled as a graph by using a kernel function or a Graph Transformer Network (GTN): the nodes of the graph are obtained by dimensionally mapping the parameters of the DNN, and the edge weights are computed with a Gaussian kernel function. The generated graphs are classified by a graph convolutional network (GCN). The associations between adjacent layers and the running mechanism of the DNN are analyzed, and the importance of each layer's parameters for the final classification result is obtained. Convolutional neural networks (CNNs) are among the most representative DNN models, and the proposed method is evaluated experimentally on CNNs. The experimental results show that the proposed method can interpret the associations among weight parameters as well as the correlation between adjacent layers. As a result, DNNs for specialized tasks, such as portable applications and edge computing, can be customized and their number of parameters reduced. The method is valuable for interpreting the operation and principles of CNNs.
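The abstract describes two concrete steps that lend themselves to a short sketch: mapping DNN parameters to graph nodes and computing edge weights with a Gaussian kernel. The snippet below is a minimal illustration of that graph-construction step, assuming one node per output unit with its incoming weights as the node feature; the feature mapping, the bandwidth sigma, and the function names are illustrative assumptions, not the paper's exact GIMDNN procedure.

```python
# Minimal sketch of Gaussian-kernel graph construction from one layer's weights.
# Assumptions (not from the paper): each output unit becomes a node whose feature
# vector is its row of incoming weights; `sigma` is a hand-chosen kernel bandwidth.
import numpy as np

def layer_to_nodes(weight_matrix: np.ndarray) -> np.ndarray:
    """Map a layer's (n_out, n_in) weight matrix to node features, one node per output unit."""
    return np.asarray(weight_matrix, dtype=float)

def gaussian_kernel_adjacency(nodes: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Dense edge weights A[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))."""
    diff = nodes[:, None, :] - nodes[None, :, :]      # pairwise feature differences
    sq_dist = np.sum(diff ** 2, axis=-1)              # squared Euclidean distances
    adjacency = np.exp(-sq_dist / (2.0 * sigma ** 2))
    np.fill_diagonal(adjacency, 0.0)                  # drop self-loops
    return adjacency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_layer_weights = rng.normal(size=(8, 16))     # stand-in for one DNN layer
    A = gaussian_kernel_adjacency(layer_to_nodes(fake_layer_weights), sigma=0.5)
    print(A.shape)                                    # (8, 8) adjacency matrix
```

The resulting adjacency matrix (together with the node features) would then be the input graph that a GCN classifies, as described in the abstract.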
Pages: 11