A Demonstration of Interpretability Methods for Graph Neural Networks

Cited by: 1
Authors
Mobaraki, Ehsan B. [1 ]
Khan, Arijit [1 ]
Affiliations
[1] Aalborg Univ, Aalborg, Denmark
Keywords
Graph neural network; interpretability; explainable AI;
DOI
10.1145/3594778.3594880
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Graph neural networks (GNNs) are widely used in many downstream applications, such as graph and node classification, entity resolution, link prediction, and question answering. Several interpretability methods for GNNs have been proposed recently. However, since they have not been thoroughly compared with each other, their trade-offs and efficiency in the context of the underlying GNNs and downstream applications remain unclear. To support more research in this domain, we develop an end-to-end interactive tool, named gInterpreter, by re-implementing 15 recent GNN interpretability methods in a common environment on top of a number of state-of-the-art GNNs employed for different downstream tasks. This paper demonstrates gInterpreter through an interactive performance profiling of 15 recent GNN interpretability methods, aiming to explain complex deep learning pipelines over graph-structured data.
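As an illustration of what such post-hoc GNN interpretability methods compute, the following minimal sketch applies GNNExplainer (one of the widely used method families) to a two-layer GCN node classifier on Cora using PyTorch Geometric. This is a hypothetical, illustrative example, not gInterpreter's own API; the Explainer interface shown assumes PyTorch Geometric 2.x, and exact arguments may differ across library versions.

import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.explain import Explainer, GNNExplainer
from torch_geometric.nn import GCNConv

# Load a standard citation graph for node classification.
dataset = Planetoid(root='data/Cora', name='Cora')
data = dataset[0]

class GCN(torch.nn.Module):
    """Two-layer GCN returning raw class logits per node."""
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Train the downstream task (node classification) briefly.
model.train()
for epoch in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

# Post-hoc explanation: GNNExplainer learns soft masks over edges and node
# features that preserve the trained model's prediction for a target node.
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='raw',
    ),
)
node_idx = 10  # arbitrary node whose prediction we want to explain
explanation = explainer(data.x, data.edge_index, index=node_idx)
print(explanation.edge_mask.shape)  # one importance score per edge
print(explanation.node_mask.shape)  # importance scores per node feature

Tools in this space typically run many such explainers over the same trained model and compare the resulting edge and feature masks, which is the kind of side-by-side profiling the abstract describes.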
Pages: 5