EDoG: Adversarial Edge Detection For Graph Neural Networks

Cited by: 3
Authors
Xu, Xiaojun [1 ]
Wang, Hanzhang [2 ]
Lal, Alok [2 ]
Gunter, Carl A. [1 ]
Li, Bo [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 60680 USA
[2] eBay, San Jose, CA USA
DOI: 10.1109/SaTML54575.2023.00027
CLC Classification: TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Graph Neural Networks (GNNs) have been widely applied to tasks in bioinformatics, drug design, and social networks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks that aim to mislead node (or subgraph) classification predictions by adding subtle perturbations. In particular, several attacks against GNNs add or delete a small number of edges, which has raised serious security concerns. Detecting these attacks is challenging due to the small magnitude of the perturbation and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategy. Specifically, we propose a novel graph generation approach combined with link prediction to detect suspicious adversarial edges. To effectively train the graph generative model, we sample several sub-graphs from the given graph. We show that, since the number of adversarial edges is usually low in practice, the sampled sub-graphs contain adversarial edges only with low probability, by the union bound. In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features for outlier detection as a preprocessing step for our pipeline. Extensive experimental results on three real-world graph datasets, including a private transaction rule dataset from a major company, and two types of synthetic graphs with controlled properties (Erdős-Rényi and scale-free graphs) show that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without any knowledge of the attack type (e.g., the degree of the target victim node), and around 0.85 with knowledge of the attack type. EDoG significantly outperforms traditional malicious edge detection baselines. We also show that an adaptive attack with full knowledge of our detection pipeline has difficulty bypassing it. Our results shed light on several principles for improving the robustness of GNNs.
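The union-bound step in the abstract can be made concrete with a back-of-envelope calculation; the symbols m, k, and s below are illustrative notation for this note, not taken from the paper. Suppose the graph has m edges, of which k are adversarial, and each sub-graph keeps s edges sampled uniformly at random. Then

\[
\Pr\big[\text{a sampled sub-graph contains any adversarial edge}\big]
\;\le\; \sum_{i=1}^{k} \Pr\big[e_i \in E_{\mathrm{sub}}\big]
\;=\; k \cdot \frac{s}{m},
\]

since each adversarial edge \(e_i\) is kept with probability \(s/m\). The bound is small whenever \(k \ll m/s\), matching the claim that sub-graphs sampled from a lightly perturbed graph are mostly clean.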
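The link-prediction component of such a pipeline can be sketched as follows. This is a minimal illustration of the pipeline's shape, not the authors' implementation: the function names (sample_subgraphs, score_edges) are made up here, and a simple Jaccard link-prediction heuristic from networkx stands in for the learned graph generative model described in the abstract.

```python
import random

import networkx as nx


def sample_subgraphs(G, n_subgraphs=5, keep_frac=0.5, seed=0):
    """Sample edge-induced sub-graphs from G.

    With few adversarial edges overall, most sampled sub-graphs are
    likely clean (the union-bound argument from the abstract).
    """
    rng = random.Random(seed)
    edges = list(G.edges())
    k = int(keep_frac * len(edges))
    return [G.edge_subgraph(rng.sample(edges, k)).copy()
            for _ in range(n_subgraphs)]


def score_edges(G, subgraphs):
    """Score each edge of G by its average link-prediction plausibility
    on the sub-graphs that happen to exclude it; low scores are suspicious.
    Jaccard similarity stands in for EDoG's learned generative model."""
    scores = {}
    for u, v in G.edges():
        vals = []
        for S in subgraphs:
            if S.has_node(u) and S.has_node(v) and not S.has_edge(u, v):
                # How plausible would edge (u, v) be, given S?
                _, _, p = next(nx.jaccard_coefficient(S, [(u, v)]))
                vals.append(p)
        if vals:  # edges never held out of any sample get no score
            scores[(u, v)] = sum(vals) / len(vals)
    return scores


if __name__ == "__main__":
    G = nx.erdos_renyi_graph(100, 0.05, seed=1)  # synthetic "clean" graph
    G.add_edge(0, 99)                            # a pretend adversarial edge
    scores = score_edges(G, sample_subgraphs(G))
    # Edges with the lowest plausibility are flagged as candidate attacks.
    flagged = sorted(scores, key=scores.get)[:5]
    print("most suspicious edges:", flagged)
```

Scoring each edge only on samples that exclude it mirrors the intuition above: a model fit on mostly-clean sub-graphs should assign low plausibility to injected edges.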
Pages: 291-305 (15 pages)
Related Papers (50 in total; items [41]-[50] shown)
  • [41] Lou, Xibai; Yu, Houjian; Worobel, Ross; Yang, Yang; Choi, Changhyun. Adversarial Object Rearrangement in Constrained Environments with Heterogeneous Graph Neural Networks. 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023: 1008-1015.
  • [42] Zhang, Jianfu; Hong, Yan; Cheng, Dawei; Zhang, Liqing; Zhao, Qibin. Defending adversarial attacks in Graph Neural Networks via tensor enhancement. Pattern Recognition, 2025, 158.
  • [43] Kumarasinghe, Udesh; Nabeel, Mohamed; De Zoysa, Kasun; Gunawardana, Kasun; Elvitigala, Charitha. HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks. 2022 IEEE International Conference on Data Mining Workshops (ICDMW), 2022: 698-705.
  • [44] Hussain, Hussain; Duricic, Tomislav; Lex, Elisabeth; Helic, Denis; Strohmaier, Markus; Kern, Roman. Structack: Structure-based Adversarial Attacks on Graph Neural Networks. Proceedings of the 32nd ACM Conference on Hypertext and Social Media (HT '21), 2021: 111-120.
  • [45] Wang, Binghui; Jia, Jinyuan; Cao, Xiaoyu; Gong, Neil Zhenqiang. Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation. KDD '21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021: 1645-1653.
  • [46] Zhang, Mengmei; Hu, Linmei; Shi, Chuan; Wang, Xiao. Adversarial Label-Flipping Attack and Defense for Graph Neural Networks. 20th IEEE International Conference on Data Mining (ICDM 2020), 2020: 791-800.
  • [47] Wei, Quanmin; Wang, Jinyan; Fu, Xingcheng; Hu, Jun; Li, Xianxian. AIC-GNN: Adversarial information completion for graph neural networks. Information Sciences, 2023, 626: 166-179.
  • [48] He, Huarui; Wang, Jie; Zhang, Zhanqiu; Wu, Feng. Compressing Deep Graph Neural Networks via Adversarial Knowledge Distillation. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 534-544.
  • [49] Li, Haoyang; Di, Shimin; Li, Zijian; Chen, Lei; Cao, Jiannong. Black-box Adversarial Attack and Defense on Graph Neural Networks. 2022 IEEE 38th International Conference on Data Engineering (ICDE 2022), 2022: 1017-1030.
  • [50] Govaers, Felix; Baggenstoss, Paul. On a Detection Method of Adversarial Samples for Deep Neural Networks. 2021 IEEE 24th International Conference on Information Fusion (FUSION), 2021: 423-427.