EDoG: Adversarial Edge Detection For Graph Neural Networks

Cited by: 3
Authors
Xu, Xiaojun [1 ]
Wang, Hanzhang [2 ]
Lal, Alok [2 ]
Gunter, Carl A. [1 ]
Li, Bo [1 ]
Affiliations
[1] Univ Illinois, Champaign, IL 60680 USA
[2] eBay, San Jose, CA USA
DOI
10.1109/SaTML54575.2023.00027
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Neural Networks (GNNs) have been widely applied to tasks in bioinformatics, drug design, and social networks. However, recent studies have shown that GNNs are vulnerable to adversarial attacks that mislead node (or subgraph) classification by adding subtle perturbations. In particular, several attacks against GNNs add or delete a small number of edges, raising serious security concerns. Detecting these attacks is challenging due to the small magnitude of the perturbation and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategy. Specifically, we propose a novel graph generation approach combined with link prediction to detect suspicious adversarial edges. To train the graph generative model effectively, we sample several subgraphs from the given graph. We show that, since the number of adversarial edges is usually small in practice, the sampled subgraphs contain adversarial edges only with low probability, by the union bound. In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features to perform outlier detection as a preprocessing step for our detector. Extensive experiments on three real-world graph datasets, including a private transaction rule dataset from a major company, and two types of synthetic graphs with controlled properties (e.g., Erdős-Rényi and scale-free graphs) show that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without any knowledge of the attack type (e.g., the degree of the target victim node), and around 0.85 AUC with knowledge of the attack type. EDoG significantly outperforms traditional malicious edge detection baselines. We also show that an adaptive attack with full knowledge of our detection pipeline has difficulty bypassing it. Our results shed light on several principles for improving the robustness of GNNs.
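
As a rough illustration only (this is not the authors' implementation), the sketch below captures the core idea from the abstract: compute a link-existence score for each edge on randomly sampled subgraphs, then flag edges the structural model finds unlikely. The union-bound argument is that if k of the graph's m edges are adversarial, a subgraph containing s edges includes at least one adversarial edge with probability at most s*k/m, so when k is small most sampled subgraphs are clean. Here networkx's resource-allocation index stands in for the paper's graph generative model, and the subgraph count, sampling fraction, and Erdős-Rényi test graph are illustrative assumptions.

    # Minimal sketch of an EDoG-style detector (illustrative, not the paper's code).
    # Idea: score each edge with a link predictor computed on random node-induced
    # subgraphs; edges the structural model finds unlikely are flagged as suspicious.
    # Assumption: networkx's resource_allocation_index stands in for the paper's
    # graph generative model; all hyperparameters below are arbitrary.
    import random
    import networkx as nx

    def edge_suspicion_scores(G, n_subgraphs=10, sample_frac=0.5, seed=0):
        """Average link-prediction score of each edge over random subgraphs;
        a low score means the edge is poorly explained by local structure."""
        rng = random.Random(seed)
        nodes = list(G.nodes())
        totals = {e: 0.0 for e in G.edges()}
        counts = {e: 0 for e in G.edges()}
        for _ in range(n_subgraphs):
            sample = rng.sample(nodes, int(sample_frac * len(nodes)))
            H = nx.Graph(G.subgraph(sample))  # mutable copy of the induced subgraph
            for e in list(H.edges()):
                H.remove_edge(*e)  # hide the edge so it cannot explain itself
                score = next(iter(nx.resource_allocation_index(H, [e])))[2]
                H.add_edge(*e)
                key = e if e in totals else (e[1], e[0])
                totals[key] += score
                counts[key] += 1
        return {e: totals[e] / counts[e] for e in totals if counts[e] > 0}

    # Usage: the lowest-scoring edges are candidates for adversarial insertion.
    G = nx.erdos_renyi_graph(100, 0.05, seed=1)
    scores = edge_suspicion_scores(G)
    print("most suspicious edges:", sorted(scores, key=scores.get)[:5])

Edges with the lowest average scores across subgraphs are the detection candidates; the paper's actual pipeline additionally uses outlier-detection features as preprocessing to handle high-perturbation attacks.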
Pages: 291 - 305 (15 pages)
Related Papers (50 total)
  • [21] Adversarial image detection in deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Becarelli, Rudy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78: 2815 - 2835
  • [22] Adversarial Attack on Graph Neural Networks as An Influence Maximization Problem
    Ma, Jiaqi
    Deng, Junwei
    Mei, Qiaozhu
    WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022, : 675 - 685
  • [23] Task and Model Agnostic Adversarial Attack on Graph Neural Networks
    Sharma, Kartik
    Verma, Samidha
    Medya, Sourav
    Bhattacharya, Arnab
    Ranu, Sayan
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023, : 15091 - 15099
  • [24] UnboundAttack: Generating Unbounded Adversarial Attacks to Graph Neural Networks
    Ennadir, Sofiane
    Alkhatib, Amr
    Nikolentzos, Giannis
    Vazirgiannis, Michalis
    Bostrom, Henrik
    COMPLEX NETWORKS & THEIR APPLICATIONS XII, VOL 1, COMPLEX NETWORKS 2023, 2024, 1141 : 100 - 111
  • [25] Adversarial Weight Perturbation Improves Generalization in Graph Neural Networks
    Wu, Yihan
    Bojchevski, Aleksandar
    Huang, Heng
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 10417 - 10425
  • [26] Bayesian Adversarial Attack on Graph Neural Networks (Student Abstract)
    Liu, Xiao
    Zhao, Jing
    Sun, Shiliang
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 13867 - 13868
  • [27] Two-level adversarial attacks for graph neural networks
    Song, Chengxi
    Niu, Lingfeng
    Lei, Minglong
    INFORMATION SCIENCES, 2024, 654
  • [28] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [29] Towards More Practical Adversarial Attacks on Graph Neural Networks
    Ma, Jiaqi
    Ding, Shuangrui
    Mei, Qiaozhu
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [30] A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks
    Qiao, Zhi
    Wu, Zhenqiang
    Chen, Jiawang
    Ren, Ping'an
    Yu, Zhiliang
    ENTROPY, 2023, 25 (01)