Concept Distillation in Graph Neural Networks

Cited by: 3
Authors
Magister, Lucie Charlotte [1 ]
Barbiero, Pietro [1 ,2 ]
Kazhdan, Dmitry [1 ]
Siciliano, Federico [3 ]
Ciravegna, Gabriele [4 ]
Silvestri, Fabrizio [3 ]
Jamnik, Mateja [1 ]
Lio, Pietro [1 ]
Affiliations
[1] Univ Cambridge, Cambridge CB3 0FD, England
[2] Univ Svizzera Italiana, CH-6900 Lugano, Switzerland
[3] Univ Roma La Sapienza, I-00185 Rome, Italy
[4] Politecn Torino, I-10129 Turin, Italy
Funding
EU Horizon 2020
Keywords
Explainability; Concepts; Graph Neural Networks;
DOI
10.1007/978-3-031-44070-0_12
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The opaque reasoning of Graph Neural Networks undermines human trust. Existing graph network explainers attempt to address this issue by providing post-hoc explanations; however, they fail to make the model itself more interpretable. To fill this gap, we introduce the Concept Distillation Module, the first differentiable concept-distillation approach for graph networks. The proposed approach is a layer that can be plugged into any graph network to make it explainable by design: it first distills graph concepts from the latent space and then uses them to solve the task. Our results demonstrate that this approach allows graph networks to: (i) attain accuracy comparable to that of their vanilla equivalents, (ii) distill meaningful concepts, achieving 4.8% higher concept completeness and 36.5% lower purity scores on average, (iii) provide high-quality concept-based logic explanations for their predictions, and (iv) support effective interventions at test time, which can increase human trust as well as improve model performance.
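To make the plug-in idea concrete, here is a minimal, illustrative PyTorch sketch of a concept layer of the kind the abstract describes: node embeddings from any GNN encoder are softly assigned to learnable concept prototypes, and the task head sees only those concept activations, so every prediction is expressible in terms of the distilled concepts. The names (ConceptLayer, prototypes, head) and the prototype-based soft assignment are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptLayer(nn.Module):
    """Illustrative concept-distillation layer (a sketch, not the paper's code).

    Maps node embeddings to soft assignments over k learnable concept
    prototypes; the assignment vector serves as an interpretable
    bottleneck for the downstream task head.
    """

    def __init__(self, embed_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Learnable concept prototypes living in the GNN's latent space.
        self.prototypes = nn.Parameter(torch.randn(n_concepts, embed_dim))
        # The task head consumes concept activations only.
        self.head = nn.Linear(n_concepts, n_classes)

    def forward(self, h: torch.Tensor):
        # Soft concept assignment: nodes near a prototype activate it.
        dists = torch.cdist(h, self.prototypes)   # (n_nodes, n_concepts)
        concepts = F.softmax(-dists, dim=-1)       # differentiable soft one-hot
        return self.head(concepts), concepts

# Usage: plug after any GNN encoder (the gnn call below is hypothetical).
# h = gnn(x, edge_index)                      # node embeddings
# layer = ConceptLayer(h.size(-1), n_concepts=8, n_classes=2)
# logits, concepts = layer(h)
```

Because the head consumes only the concept vector, test-time interventions of the kind mentioned in point (iv) can be sketched by editing `concepts` before it reaches the head.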
Pages: 233-255 (23 pages)