Robust explanations for graph neural network with neuron explanation component

Cited by: 1
|
Authors
Chen, Jinyin [1 ,2 ]
Huang, Guohan [3 ]
Zheng, Haibin [1 ]
Du, Hang [4 ]
Zhang, Jian [5 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Hangzhou 310023, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[3] Yiqiyin Hangzhou Technol Co Ltd, Hangzhou 311215, Peoples R China
[4] Natl Univ Def Technol, Coll Syst Engn, Changsha 410073, Peoples R China
[5] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural network; Interpretability; Neuron path distribution; Robustness; Adversarial detection;
DOI
10.1016/j.ins.2023.119785
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Graph neural networks (GNNs) have been successfully applied to a variety of graph-structure analysis tasks. Beyond their outstanding performance, explaining GNNs' predictions remains an open problem, which hinders trust in GNNs in practical scenarios. Consequently, great efforts have been made to build interpreters for GNNs in order to understand their behavior. However, existing works still suffer from two main problems: (i) explanation shifting in normal explanation, where the explanations provided by the interpreters are insufficient to precisely explain the behavior of the GNNs; and (ii) gullibility failure in adversarial detection, where the interpreters are easily bypassed by well-designed adversarial perturbations, resulting in the omission of anomalies. To address these issues, we propose a robust interpreter for GNNs, named the Neuron Explanation Component (NEC), built from the perspective of the model's neuron activation patterns. It measures the difference in the GNN's neuron path distribution between subgraphs and the original graph to generate explanations for the model's predictions. NEC outperforms previous works in explanation accuracy, robustness against adversarial attacks, and transferability among different GNN interpreters. Extensive evaluations are conducted on 4 benchmarks, 6 interpreters, and 2 scenarios (i.e., normal explanation and adversarial detection). Significant improvements in explanation ability and adversarial detection performance demonstrate NEC's superiority.
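The core idea summarized in the abstract, scoring a candidate explanatory subgraph by how much it shifts the GNN's neuron activation ("neuron path") distribution relative to the original graph, can be illustrated with a minimal PyTorch sketch. This is not the authors' NEC implementation: the dense-adjacency TinyGCN, the names activation_distribution and subgraph_score, and the total-variation divergence used as the score are all illustrative assumptions.

```python
# Minimal, hypothetical sketch of the idea in the abstract: compare the
# distribution of activated hidden neurons when the model sees a candidate
# explanatory subgraph versus the full graph. NOT the paper's NEC code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyGCN(nn.Module):
    """Two-layer GCN on a dense adjacency matrix (illustrative only)."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1, keepdim=True).clamp(min=1)
        h = F.relu(self.lin1((a_hat / deg) @ x))      # hidden activations
        out = self.lin2((a_hat / deg) @ h)
        return out, h


def activation_distribution(model, x, adj):
    """Per-neuron firing rate (ReLU > 0), normalised to a distribution."""
    with torch.no_grad():
        _, h = model(x, adj)
    freq = (h > 0).float().mean(dim=0)
    return freq / freq.sum().clamp(min=1e-12)


def subgraph_score(model, x, adj, node_mask):
    """Smaller score = subgraph better reproduces the full graph's neuron usage."""
    p = activation_distribution(model, x, adj)
    sub_adj = adj * node_mask.unsqueeze(0) * node_mask.unsqueeze(1)
    q = activation_distribution(model, x * node_mask.unsqueeze(1), sub_adj)
    return 0.5 * (p - q).abs().sum().item()           # total-variation distance


if __name__ == "__main__":
    n, d = 8, 5
    adj = (torch.rand(n, n) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float()               # symmetric random graph
    x = torch.randn(n, d)
    model = TinyGCN(d, 16, 3)
    mask = (torch.rand(n) > 0.5).float()              # candidate subgraph
    print("neuron-path divergence:", subgraph_score(model, x, adj, mask))
```

In this sketch, a low divergence indicates that the retained subgraph drives roughly the same neurons as the full graph, which mirrors the abstract's notion of selecting subgraphs that preserve the model's neuron path distribution.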
Pages: 18
Related Papers
50 records in total
  • [21] Robust Hashing for Neural Network Models via Heterogeneous Graph Representation
    Huang, Lin
    Tao, Yitong
    Qin, Chuan
    Zhang, Xinpeng
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2024, 31 : 2640 - 2644
  • [22] Robust Self-Supervised Structural Graph Neural Network for Social Network Prediction
    Zhang, Yanfu
    Gao, Hongchang
    Pei, Jian
    Huang, Heng
    [J]. PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22), 2022, : 1352 - 1361
  • [24] A robust training of dendritic neuron model neural network for time series prediction
    Yilmaz, Ayse
    Yolcu, Ufuk
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (14): 10387 - 10406
  • [25] Robust Airport Surface Object Detection Based on Graph Neural Network
    Tang, Wenyi
    Li, Hongjue
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (09):
  • [26] Towards robust explanations for deep neural networks
    Dombrowski, Ann-Kathrin
    Anders, Christopher J.
    Mueller, Klaus-Robert
    Kessel, Pan
    [J]. PATTERN RECOGNITION, 2022, 121
  • [27] Explanation-based Graph Neural Networks for Graph Classification
    Seo, Sangwoo
    Jung, Seungjun
    Kim, Changick
    [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2836 - 2842
  • [28] Chain Graph Explanation of Neural Network Based on Feature-Level Class Confusion
    Hwang, Hyekyoung
    Park, Eunbyung
    Shin, Jitae
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (03):
  • [29] Robust Underwater Visual Graph SLAM using a Siamese Neural Network and Robust Image Matching
    Burguera, Antoni
    [J]. PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4, 2022, : 591 - 598
  • [30] Evaluating Recurrent Neural Network Explanations
    Arras, Leila
    Osman, Ahmed
    Mueller, Klaus-Robert
    Samek, Wojciech
    [J]. BLACKBOXNLP WORKSHOP ON ANALYZING AND INTERPRETING NEURAL NETWORKS FOR NLP AT ACL 2019, 2019, : 113 - 126