Evaluating explainability for graph neural networks

Cited by: 0
Authors
Chirag Agarwal
Owen Queen
Himabindu Lakkaraju
Marinka Zitnik
Affiliations
[1] Media and Data Science Research Lab, Adobe
[2] Department of Biomedical Informatics, Harvard University
[3] Department of Electrical Engineering and Computer Science, University of Tennessee
[4] Department of Computer Science, Harvard University
[5] Harvard Business School
[6] Harvard Data Science Initiative
[7] Broad Institute of MIT and Harvard
Abstract
As explanations are increasingly used to understand the behavior of graph neural networks (GNNs), evaluating the quality and reliability of GNN explanations is crucial. However, assessing the quality of GNN explanations is challenging as existing graph datasets have no or unreliable ground-truth explanations. Here, we introduce a synthetic graph data generator, ShapeGGen, which can generate a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. The flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows ShapeGGen to mimic the data in various real-world areas. We include ShapeGGen and several real-world graph datasets in a graph explainability library, GraphXAI. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark GNN explainability methods.
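
To make the benchmarking idea concrete, the sketch below illustrates the kind of ground-truth comparison the abstract describes: an explainer's node-importance scores are checked against the ground-truth explanation mask that a ShapeGGen-style dataset provides. This is a minimal, library-agnostic Python sketch; the function name explanation_accuracy, the Jaccard-style score, and the toy arrays are illustrative assumptions and do not reflect GraphXAI's actual API.

import numpy as np

def explanation_accuracy(gt_mask, pred_scores, top_k):
    """Hypothetical metric: Jaccard overlap between the ground-truth node mask
    and the top-k nodes ranked by an explainer's importance scores."""
    # Nodes marked important by the ground-truth explanation (binary mask).
    gt_nodes = set(np.flatnonzero(gt_mask))
    # Top-k nodes according to the explainer's importance scores.
    pred_nodes = set(np.argsort(-pred_scores)[:top_k].tolist())
    if not gt_nodes and not pred_nodes:
        return 1.0
    return len(gt_nodes & pred_nodes) / len(gt_nodes | pred_nodes)

# Toy example: a 6-node graph where nodes 1, 2 and 4 form the ground-truth motif.
gt_mask = np.array([0, 1, 1, 0, 1, 0])
pred_scores = np.array([0.10, 0.90, 0.70, 0.60, 0.30, 0.05])  # explainer output
print(explanation_accuracy(gt_mask, pred_scores, top_k=3))  # 0.5

GraphXAI's actual metrics are richer than this single score; the sketch only shows the core idea of scoring an explanation against a known ground truth.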
Related papers (50 in total)
  • [1] Evaluating explainability for graph neural networks
    Agarwal, Chirag
    Queen, Owen
    Lakkaraju, Himabindu
    Zitnik, Marinka
    SCIENTIFIC DATA, 2023, 10 (01)
  • [2] Evaluating Neighbor Explainability for Graph Neural Networks
    Llorente, Oscar
    Fawzy, Rana
    Keown, Jared
    Horemuz, Michal
    Vaderna, Peter
    Laki, Sandor
    Kotroczo, Roland
    Csoma, Rita
    Szalai-Gindl, Janos Mark
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, PT I, XAI 2024, 2024, 2153 : 383 - 402
  • [3] On Glocal Explainability of Graph Neural Networks
    Lv, Ge
    Chen, Lei
    Cao, Caleb Chen
    DATABASE SYSTEMS FOR ADVANCED APPLICATIONS, DASFAA 2022, PT I, 2022 : 648 - 664
  • [4] Explainability Methods for Graph Convolutional Neural Networks
    Pope, Phillip E.
    Kolouri, Soheil
    Rostami, Mohammad
    Martin, Charles E.
    Hoffmann, Heiko
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 10764 - 10773
  • [5] Explainability in Graph Neural Networks: A Taxonomic Survey
    Yuan, Hao
    Yu, Haiyang
    Gui, Shurui
    Ji, Shuiwang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (05) : 5782 - 5799
  • [6] On Explainability of Graph Neural Networks via Subgraph Explorations
    Yuan, Hao
    Yu, Haiyang
    Wang, Jie
    Li, Kang
    Ji, Shuiwang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [7] On Data-Aware Global Explainability of Graph Neural Networks
    Lv, Ge
    Chen, Lei
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2023, 16 (11): : 3447 - 3460
  • [8] Towards Multi-Grained Explainability for Graph Neural Networks
    Wang, Xiang
    Wu, Ying-Xin
    Zhang, An
    He, Xiangnan
    Chua, Tat-Seng
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021
  • [9] Evaluating the Explainability of Neural Rankers
    Pandian, Saran
    Ganguly, Debasis
    MacAvaney, Sean
    ADVANCES IN INFORMATION RETRIEVAL, ECIR 2024, PT IV, 2024, 14611 : 369 - 383
  • [10] GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
    Amara, Kenza
    Ying, Rex
    Zhang, Zitao
    Han, Zhichao
    Shan, Yinan
    Brandes, Ulrik
    Schemm, Sebastian
    Zhang, Ce
    LEARNING ON GRAPHS CONFERENCE, VOL 198, 2022, 198