Evaluating explainability for graph neural networks

Cited by: 0
Authors
Chirag Agarwal
Owen Queen
Himabindu Lakkaraju
Marinka Zitnik
Affiliations
[1] Media and Data Science Research Lab, Adobe
[2] Department of Electrical Engineering and Computer Science, University of Tennessee
[3] Department of Biomedical Informatics, Harvard University
[4] Department of Computer Science, Harvard University
[5] Harvard Business School
[6] Harvard Data Science Initiative
[7] Broad Institute of MIT and Harvard
Source
Keywords
DOI
Not available
CLC number
Subject classification
Abstract
As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging because existing graph datasets either lack ground-truth explanations or provide unreliable ones. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., with varying graph sizes, degree distributions, and homophilic vs. heterophilic structure) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic data from a wide range of real-world domains. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data-processing functions, visualizers, GNN model implementations, and evaluation metrics for benchmarking GNN explainability methods.
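To make the abstract concrete, the following is a minimal, self-contained sketch of the idea behind a ShapeGGen-style generator: plant a recognizable motif into a random base graph, label nodes by motif membership, and record the motif's nodes as the ground-truth explanation for each label. This is our illustration, not code from the paper or the GraphXAI library; the function name, motif choice, and parameters are all assumptions.

    import random
    import networkx as nx

    def shapeggen_sketch(num_motifs=20, base_nodes=300, p_edge=0.01, seed=0):
        """Attach 5-node 'house' motifs to a random base graph. Motif
        membership defines both the node labels and the ground-truth
        explanations (illustrative sketch, not the GraphXAI code)."""
        rng = random.Random(seed)
        g = nx.gnp_random_graph(base_nodes, p_edge, seed=seed)
        labels = {n: 0 for n in g.nodes}   # 0 = ordinary base-graph node
        ground_truth = {}                  # node -> set of nodes explaining its label
        for _ in range(num_motifs):
            first = g.number_of_nodes()
            house = list(range(first, first + 5))
            # House motif: a 4-cycle body (nodes 0-3) plus a roof node (4).
            g.add_edges_from([(house[0], house[1]), (house[1], house[2]),
                              (house[2], house[3]), (house[3], house[0]),
                              (house[2], house[4]), (house[3], house[4])])
            g.add_edge(rng.randrange(base_nodes), house[0])  # attach motif to base graph
            for n in house:
                labels[n] = 1                  # 1 = motif node
                ground_truth[n] = set(house)   # the planted motif is the explanation
        return g, labels, ground_truth

    g, labels, gt = shapeggen_sketch()
    print(g.number_of_nodes(), "nodes,", sum(labels.values()), "of them in motifs")

A benchmark can then score an explainer by comparing its node mask for each motif node against ground_truth, e.g., with a set-overlap metric. The real generator described in the abstract exposes further controls (graph size, degree distribution, homophily vs. heterophily), and GraphXAI ships this kind of workflow together with real-world datasets and evaluation metrics.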
Related papers (50 in total)
  • [21] A Local Explainability Technique for Graph Neural Topic Models
    Bharathwajan Rajendran
    Chandran G. Vidya
    J. Sanil
    S. Asharaf
    Human-Centric Intelligent Systems, 2024, 4 (1): 53-76
  • [22] An analysis of explainability methods for convolutional neural networks
    Vonder Haar, Lynn
    Elvira, Timothy
    Ochoa, Omar
    Engineering Applications of Artificial Intelligence, 2023, 117
  • [23] Evaluating the generalizability of graph neural networks for predicting collision cross section
    Engler Hart, Chloe
    Preto, Antonio Jose
    Chanana, Shaurya
    Healey, David
    Kind, Tobias
    Domingo-Fernandez, Daniel
    Journal of Cheminformatics, 2024, 16 (1)
  • [24] Towards Explainability of non-Convolutional Neural Networks
    Doese, Jonas
    Weis, Torben
    UbiComp/ISWC '21 Adjunct: Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers, 2021: 100-103
  • [25] Layer factor analysis in convolutional neural networks for explainability
    Lopez-Gonzalez, Clara I.
    Gomez-Silva, Maria J.
    Besada-Portas, Eva
    Pajares, Gonzalo
    Applied Soft Computing, 2024, 150
  • [26] VERIX: Towards Verified Explainability of Deep Neural Networks
    Wu, Min
    Wu, Haoze
    Barrett, Clark
    Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023
  • [27] Seeking Interpretability and Explainability in Binary Activated Neural Networks
    Leblanc, Benjamin
    Germain, Pascal
    Explainable Artificial Intelligence, Pt I, XAI 2024, 2024, 2153: 3-20
  • [28] Enhancing Explainability of Neural Networks Through Architecture Constraints
    Yang, Zebin
    Zhang, Aijun
    Sudjianto, Agus
    IEEE Transactions on Neural Networks and Learning Systems, 2021, 32 (6): 2610-2621
  • [29] Toward Embedding Ambiguity-Sensitive Graph Neural Network Explainability
    Liu, Xiaofeng
    Ma, Yinglong
    Chen, Degang
    Liu, Ling
    IEEE Transactions on Fuzzy Systems, 2024, 32 (12): 6951-6964
  • [30] Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity
    Henderson, Ryan
    Clevert, Djork-Arné
    Montanari, Floriane
    International Conference on Machine Learning, 2021, 139