HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks

Cited by: 1
|
Authors
Kumarasinghe, Udesh [1 ,2 ]
Nabeel, Mohamed [3 ]
De Zoysa, Kasun [1 ]
Gunawardana, Kasun [1 ]
Elvitigala, Charitha [2 ]
Affiliations
[1] Univ Colombo, Sch Comp, Colombo, Sri Lanka
[2] SCoRe Lab, Colombo, Sri Lanka
[3] Palo Alto Networks Inc, Palo Alto, CA USA
Keywords
GNN; Adversarial attacks; Defenses; Heterogeneous graphs;
DOI
10.1109/ICDMW58026.2022.00096
CLC number
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have achieved remarkable success in many application domains, including drug discovery, program analysis, social networks, and cyber security. However, they have been shown not to be robust against adversarial attacks. Many adversarial attacks on homogeneous GNNs, along with corresponding defenses, have recently been proposed. Most of these attacks and defenses, however, are ineffective on heterogeneous graphs, because they optimize under the assumption that all edges and nodes are of the same type, and they further introduce semantically incorrect edges into the perturbed graphs. Here, we first develop HetePR-BCD, a training-time (i.e., poisoning) adversarial attack on heterogeneous graphs that outperforms the state-of-the-art attacks proposed in the literature. Our experimental results on three benchmark heterogeneous graphs show that our attack, with a small perturbation budget of 15%, degrades performance (F1 score) by up to 32% more than existing attacks. Concerningly, existing defenses are not robust against our attack. These defenses primarily modify the GNN's neural message-passing operators under the assumption that adversarial attacks tend to connect nodes with dissimilar features, but this assumption does not hold in heterogeneous graphs. We then construct HeteroGuard, an effective defense for heterogeneous models against training-time attacks, including HetePR-BCD. HeteroGuard outperforms existing defenses by 3-8% in F1 score, depending on the benchmark dataset.
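For intuition, the following is a minimal, hypothetical Python sketch (not the paper's implementation) of the feature-similarity pruning heuristic that homogeneous-graph defenses such as GNNGuard rely on, together with a toy example of why it misfires on heterogeneous graphs: the endpoints of a perfectly legitimate cross-type edge live in different feature spaces, so their similarity carries no signal about whether the edge is adversarial. All names, feature values, and the threshold below are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom > 0 else 0.0

def prune_dissimilar_edges(features, edges, threshold=0.1):
    """Drop edges whose endpoint features are dissimilar.

    This is the core heuristic behind several homogeneous-graph defenses:
    adversarial edges are assumed to connect nodes with dissimilar features,
    so low-similarity edges are pruned (or down-weighted) before message
    passing.
    """
    return [(u, v) for (u, v) in edges
            if cosine_similarity(features[u], features[v]) >= threshold]

# Toy heterogeneous graph: a 'user' node and a 'product' node share a
# legitimate edge, but their features describe unrelated attributes, so
# the similarity score says nothing about whether the edge is benign.
features = {
    "user_0":    np.array([0.9, 0.1, 0.0]),  # e.g. activity statistics
    "product_0": np.array([0.0, 0.2, 0.8]),  # e.g. category encoding
}
edges = [("user_0", "product_0")]  # a benign cross-type edge

# Cosine similarity here is ~0.03 < 0.1, so the heuristic prunes the
# benign edge -- while gaining no leverage against attack edges that
# also cross node types. This is the assumption HeteroGuard avoids.
print(prune_dissimilar_edges(features, edges))  # -> []
```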
Pages: 698-705
Number of pages: 8
Related papers
50 items in total
  • [31] A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks
    Qiao, Zhi
    Wu, Zhenqiang
    Chen, Jiawang
    Ren, Ping'an
    Yu, Zhiliang
    [J]. ENTROPY, 2023, 25 (01)
  • [32] Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
    Yan, Wenjie
    Li, Ziqi
    Qi, Yongjun
    [J]. CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (03) : 732 - 741
  • [33] Defending Against Free-Riders Attacks in Distributed Generative Adversarial Networks
    Zhao, Zilong
    Huang, Jiyue
    Chen, Lydia Y.
    Roos, Stefanie
    [J]. FINANCIAL CRYPTOGRAPHY AND DATA SECURITY, FC 2023, PT II, 2024, 13951 : 200 - 217
  • [35] Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks
    Zhang, Ming
    Li, Hu
    Kuang, Xiaohui
    Pang, Ling
    Wu, Zhendong
    [J]. INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2019), 2020, 11999 : 613 - 629
  • [36] ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness
    Theagarajan, Rajkumar
    Chen, Ming
    Bhanu, Bir
    Zhang, Jing
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6981 - 6989
  • [37] Defending Convolutional Neural Network-Based Object Detectors Against Adversarial Attacks
    Cheng, Jeffrey
    Hu, Victor
    [J]. 2020 9TH IEEE INTEGRATED STEM EDUCATION CONFERENCE (ISEC 2020), 2020,
  • [38] Defending Against Adversarial Attacks in Speaker Verification Systems
    Chang, Li-Chi
    Chen, Zesheng
    Chen, Chao
    Wang, Guoping
    Bi, Zhuming
    [J]. 2021 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE (IPCCC), 2021,
  • [39] Defending Deep Learning Models Against Adversarial Attacks
    Mani, Nag
    Moh, Melody
    Moh, Teng-Sheng
    [J]. INTERNATIONAL JOURNAL OF SOFTWARE SCIENCE AND COMPUTATIONAL INTELLIGENCE-IJSSCI, 2021, 13 (01): : 72 - 89
  • [40] Defending Against Adversarial Attacks Using Random Forest
    Ding, Yifan
    Wang, Liqiang
    Zhang, Huan
    Yi, Jinfeng
    Fan, Deliang
    Gong, Boqing
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 105 - 114