Towards Few-Shot Self-explaining Graph Neural Networks

Cited: 0
Authors:
Peng, Jingyu [1]
Liu, Qi [1,2]
Yue, Linan [1]
Zhang, Zaixi [1]
Zhang, Kai [1]
Sha, Yunhao [1]
Affiliations:
[1] Univ Sci & Technol China, State Key Lab Cognit Intelligence, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Keywords:
Explainability; Graph Neural Network; Meta Learning
DOI: 10.1007/978-3-031-70365-2_7
CLC Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Recent advancements in Graph Neural Networks (GNNs) have spurred an upsurge of research dedicated to enhancing the explainability of GNNs, particularly in critical domains such as medicine. A promising approach is the self-explaining method, which outputs explanations along with predictions. However, existing self-explaining models require a large amount of training data, rendering them inapplicable in few-shot scenarios. To address this challenge, in this paper, we propose a Meta-learned Self-Explaining GNN (MSE-GNN), a novel framework that generates explanations to support predictions in few-shot settings. MSE-GNN adopts a two-stage self-explaining structure, consisting of an explainer and a predictor. Specifically, the explainer first imitates human attention to select an explanation subgraph, naturally focusing on regions that contain important characteristics. Subsequently, the predictor mimics the decision-making process, making predictions based on the generated explanation. Moreover, with a novel meta-training process and a designed mechanism that exploits task information, MSE-GNN can achieve remarkable performance on new few-shot tasks. Extensive experimental results on four datasets demonstrate that MSE-GNN achieves superior performance on prediction tasks while generating higher-quality explanations than existing methods. The code is publicly available at https://github.com/jypeng28/MSE-GNN.
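For a concrete picture of the two-stage self-explaining structure the abstract describes, the following is a minimal PyTorch Geometric sketch. It is an illustrative assumption based only on the abstract, not the authors' released implementation (see the GitHub link for that): the class names, the soft node mask, and the single-layer GCN backbones are all hypothetical choices.

```python
# Hedged sketch of a two-stage explainer/predictor GNN in the spirit of
# MSE-GNN's abstract. Not the authors' code; all names and design details
# here are assumptions for illustration.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class Explainer(nn.Module):
    """Stage 1: score each node; high-scoring nodes form the explanation subgraph."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.conv = GCNConv(in_dim, hid_dim)
        self.score = nn.Linear(hid_dim, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv(x, edge_index))
        return torch.sigmoid(self.score(h))  # soft node mask in (0, 1), shape [N, 1]


class Predictor(nn.Module):
    """Stage 2: classify the graph using only the masked (explained) node features."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.conv = GCNConv(in_dim, hid_dim)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv(x, edge_index))
        return self.head(global_mean_pool(h, batch))  # graph-level logits


class SelfExplainingGNN(nn.Module):
    """Chains the two stages: prediction is conditioned on the explanation."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.explainer = Explainer(in_dim, hid_dim)
        self.predictor = Predictor(in_dim, hid_dim, n_classes)

    def forward(self, x, edge_index, batch):
        mask = self.explainer(x, edge_index)                  # explanation
        logits = self.predictor(x * mask, edge_index, batch)  # prediction from it
        return logits, mask
```

In the paper's few-shot setting, a backbone like this would additionally be meta-trained over a distribution of tasks (e.g., MAML-style inner/outer loops) together with the task-information mechanism the abstract mentions; the sketch omits those components.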
Pages: 109-126 (18 pages)
相关论文
共 50 条
  • [21] Graph Prototypical Networks for Few-shot Learning on Attributed Networks
    Ding, Kaize
    Wang, Jianling
    Li, Jundong
    Shu, Kai
    Liu, Chenghao
    Liu, Huan
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020, : 295 - 304
  • [22] Explaining Siamese Networks in Few-Shot Learning for Audio Data
    Fedele, Andrea
    Guidotti, Riccardo
    Pedreschi, Dino
    DISCOVERY SCIENCE (DS 2022), 2022, 13601 : 509 - 524
  • [23] Concept-Oriented Self-Explaining Neural Networks
    Park, Min Sue
    Hwang, Hyung Ju
    NEURAL PROCESSING LETTERS, 2023, 55 (08) : 10873 - 10904
  • [24] Learning Dual-Pooling Graph Neural Networks for Few-Shot Video Classification
    Hu, Yufan
    Gao, Junyu
    Xu, Changsheng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 : 4285 - 4296
  • [25] Category Decoupled Few-Shot Classification for Graph Neural Network
    Deng, Gelong
    Huang, Guoheng
    Chen, Ziyan
    Computer Engineering and Applications, 2024, 60 (02) : 129 - 136
  • [26] Local feature graph neural network for few-shot learning
    Weng P.
    Dong S.
    Ren L.
    Zou K.
    Journal of Ambient Intelligence and Humanized Computing, 2023, 14 (04) : 4343 - 4354
  • [27] An Overview of Deep Neural Networks for Few-Shot Learning
    Zhao, Juan
    Kong, Lili
    Lv, Jiancheng
    BIG DATA MINING AND ANALYTICS, 2025, 8 (01): : 145 - 188
  • [28] Towards Self-explaining Intelligent Environments
    Autexier, Serge
    Drechsler, Rolf
    2018 7TH INTERNATIONAL CONFERENCE ON RELIABILITY, INFOCOM TECHNOLOGIES AND OPTIMIZATION (TRENDS AND FUTURE DIRECTIONS) (ICRITO) (ICRITO), 2018, : 82 - 87
  • [29] Q-SENN: Quantized Self-Explaining Neural Networks
    Norrenbrock, Thomas
    Rudolph, Marco
    Rosenhahn, Bodo
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 19, 2024, : 21482 - 21491
  • [30] Towards Self-Explaining Ambient Applications
    Kordts, Boerge
    Gerlach, Bennet
    Schrader, Andreas
    THE 14TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS, PETRA 2021, 2021, : 383 - 390