Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training

Cited by: 0
Authors
Xu-Gang Wu
Hui-Jun Wu
Xu Zhou
Xiang Zhao
Kai Lu
Affiliations
[1] National University of Defense Technology,College of Computer
[2] National University of Defense Technology,College of Systems Engineering
Keywords
adversarial defense; graph neural network; multi-view; co-training;
DOI: not available
Abstract
Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations. This limits the application of GNN models in real-world scenarios. Such vulnerability can be attributed to the model’s excessive reliance on incomplete data views (e.g., graph convolutional networks (GCNs) heavily rely on graph structures to make predictions). By integrating the information from multiple perspectives, this problem can be effectively addressed; typical views of graphs include the node feature view and the graph structure view. In this paper, we propose C2oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Due to the orthogonality of the views, sub-models in the feature view tend to be robust against the perturbations targeted at sub-models in the structure view. C2oG allows sub-models to correct one another mutually and thus enhance the robustness of their ensembles. In our evaluations, C2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
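The abstract describes training one sub-model per view and letting each view's confident predictions teach the other through a shared labeled pool. The following is a minimal, illustrative sketch of that generic co-training loop, not the paper's actual C2oG algorithm: the two views are plain vector lists (standing in for node features and structure rows), the sub-models are simple nearest-centroid classifiers, and all names (`co_train`, `_centroids`, `_predict`) are hypothetical.

```python
import math

def _dist(a, b):
    # Euclidean distance between two equal-length vectors
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def _centroids(view, labels):
    # Mean vector per class over the currently labeled nodes in this view
    sums, counts = {}, {}
    for i, y in labels.items():
        v = view[i]
        if y not in sums:
            sums[y], counts[y] = [0.0] * len(v), 0
        sums[y] = [s + x for s, x in zip(sums[y], v)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def _predict(cents, x):
    # Return (best_label, confidence); confidence is the margin between
    # the two nearest class centroids (larger gap = more confident).
    ranked = sorted((_dist(c, x), y) for y, c in cents.items())
    conf = ranked[1][0] - ranked[0][0] if len(ranked) > 1 else ranked[0][0]
    return ranked[0][1], conf

def co_train(view_a, view_b, labels, rounds=5):
    # labels: node index -> class for the initially labeled nodes.
    # Each round, each view labels its single most confident unlabeled
    # node and adds it to the shared pool, so the views teach each other.
    labels = dict(labels)
    n = len(view_a)
    for _ in range(rounds):
        for view in (view_a, view_b):
            unlabeled = [i for i in range(n) if i not in labels]
            if not unlabeled:
                return labels
            cents = _centroids(view, labels)
            preds = {i: _predict(cents, view[i]) for i in unlabeled}
            best = max(unlabeled, key=lambda i: preds[i][1])
            labels[best] = preds[best][0]
    return labels
```

In the paper's setting the sub-models would be GNN/MLP classifiers over the structure and feature views respectively, and a perturbed edge only corrupts one of the two views, which is why the other can correct it.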
Pages: 1161-1175
Number of pages: 14
Related Papers
50 results in total
  • [1] Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training
    Wu, Xu-Gang
    Wu, Hui-Jun
    Zhou, Xu
    Zhao, Xiang
    Lu, Kai
    [J]. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2022, 37 (05) : 1161 - 1175
  • [2] Robust Graph Neural Networks Against Adversarial Attacks via Jointly Adversarial Training
    Tian, Hu
    Ye, Bowei
    Zheng, Xiaolong
    Wu, Desheng Dash
    [J]. IFAC PAPERSONLINE, 2020, 53 (05): : 420 - 425
  • [3] A Lightweight Method for Defense Graph Neural Networks Adversarial Attacks
    Qiao, Zhi
    Wu, Zhenqiang
    Chen, Jiawang
    Ren, Ping'an
    Yu, Zhiliang
    [J]. ENTROPY, 2023, 25 (01)
  • [4] Adversarial attacks against dynamic graph neural networks via node injection
    Jiang, Yanan
    Xia, Hui
    [J]. HIGH-CONFIDENCE COMPUTING, 2024, 4 (01):
  • [6] Defending against adversarial attacks on graph neural networks via similarity property
    Yao, Minghong
    Yu, Haizheng
    Bian, Hong
    [J]. AI COMMUNICATIONS, 2023, 36 (01) : 27 - 39
  • [7] Graph Structure Reshaping Against Adversarial Attacks on Graph Neural Networks
    Wang, Haibo
    Zhou, Chuan
    Chen, Xin
    Wu, Jia
    Pan, Shirui
    Li, Zhao
    Wang, Jilong
    Yu, Philip S.
    [J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36 (11) : 6344 - 6357
  • [8] Towards More Practical Adversarial Attacks on Graph Neural Networks
    Ma, Jiaqi
    Ding, Shuangrui
    Mei, Qiaozhu
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [9] Robust Heterogeneous Graph Neural Networks against Adversarial Attacks
    Zhang, Mengmei
    Wang, Xiao
    Zhu, Meiqi
    Shi, Chuan
    Zhang, Zhiqiang
    Zhou, Jun
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 4363 - 4370
  • [10] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33