A Meta-Learning Approach for Training Explainable Graph Neural Networks

Cited by: 7
Authors
Spinelli, Indro [1]
Scardapane, Simone [1]
Uncini, Aurelio [1]
Affiliations
[1] Sapienza Univ Rome, Dept Informat Engn Elect & Telecommun DIET, I-00184 Rome, Italy
Keywords
Training; Optimization; Adaptation models; Task analysis; Predictive models; Graph neural networks; Feature extraction; Explainable Artificial Intelligence (AI); graph classification; graph neural network (GNN); meta learning; node classification
DOI
10.1109/TNNLS.2022.3171398
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this article, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-explainer for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure toward minima that allow post hoc explainers to achieve better results, without sacrificing the overall accuracy of the GNN. Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on the fly on randomly sampled nodes. The final internal representation relies on a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost to the accuracy of the model.
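The meta-training loop described in the abstract can be sketched in heavily simplified form. This is not the authors' implementation: the GNN is replaced by a plain MLP on random features (no message passing), the inner GNNExplainer is reduced to a learnable soft feature mask, and all hyperparameters (inner steps, sparsity weight, outer loss weight) are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the GNN: a small MLP on random "node features".
# (Illustrative only -- no actual graph convolutions.)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
X = torch.randn(32, 8)            # 32 nodes, 8 features each
y = torch.randint(0, 3, (32,))    # node labels
task_loss_fn = nn.CrossEntropyLoss()
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(100):
    # Inner loop: train a fresh explainer (here a soft feature mask,
    # standing in for GNNExplainer's edge mask) on one sampled node.
    i = torch.randint(0, 32, (1,)).item()
    mask_logits = torch.zeros(8, requires_grad=True)
    inner_opt = torch.optim.SGD([mask_logits], lr=0.5)
    for _ in range(5):
        mask = torch.sigmoid(mask_logits)
        pred = model(X[i] * mask).unsqueeze(0)
        # Fidelity term plus sparsity penalty, loosely mirroring
        # GNNExplainer's objective (weights are made up here).
        inner_loss = task_loss_fn(pred, y[i:i + 1]) + 0.1 * mask.mean()
        inner_opt.zero_grad()
        inner_loss.backward()
        inner_opt.step()

    # Outer step: task loss plus the explainer's post-adaptation loss,
    # steering the model toward parameters that an explainer can fit well.
    mask = torch.sigmoid(mask_logits).detach()
    expl_loss = task_loss_fn(model(X[i] * mask).unsqueeze(0), y[i:i + 1])
    loss = task_loss_fn(model(X), y) + 1.0 * expl_loss
    outer_opt.zero_grad()
    loss.backward()
    outer_opt.step()
```

The key design point mirrored here is that the explainer is re-initialized and adapted on the fly each outer step, so the outer update rewards model parameters whose predictions a quickly trained explainer can reproduce under a sparse mask.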
Pages: 4647-4655 (9 pages)