Towards Defense Against Adversarial Attacks on Graph Neural Networks via Calibrated Co-Training

Cited: 0
Authors
Xu-Gang Wu
Hui-Jun Wu
Xu Zhou
Xiang Zhao
Kai Lu
Institutions
[1] National University of Defense Technology, College of Computer
[2] National University of Defense Technology, College of Systems Engineering
Keywords
adversarial defense; graph neural network; multi-view; co-training;
DOI
Not available
Abstract
Graph neural networks (GNNs) have achieved significant success in graph representation learning. Nevertheless, recent work indicates that current GNNs are vulnerable to adversarial perturbations, in particular structural perturbations, which narrows the applicability of GNN models in real-world scenarios. Such vulnerability can be attributed to a model's excessive reliance on an incomplete data view (e.g., graph convolutional networks (GCNs) rely heavily on the graph structure to make predictions). This problem can be effectively addressed by integrating information from multiple perspectives; typical views of a graph include the node-feature view and the graph-structure view. In this paper, we propose C2oG, which combines these two typical views to train sub-models and fuses their knowledge through co-training. Owing to the orthogonality of the views, sub-models in the feature view tend to be robust against perturbations that target sub-models in the structure view. C2oG allows the sub-models to correct one another and thus enhances the robustness of their ensemble. In our evaluations, C2oG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean datasets.
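The paper's exact training procedure is not reproduced on this page. Purely as an illustration of the general two-view co-training idea the abstract describes, the following self-contained NumPy sketch builds a toy two-class graph, trains a feature-view sub-model (nearest-centroid on node features) and a structure-view sub-model (label propagation over the adjacency matrix), lets each pass its most confident pseudo-labels to the other, and averages their predictions. Every modelling choice here (the classifiers, the confidence threshold, the toy data) is an assumption for illustration, not the authors' C2oG implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class graph: both the features and the structure carry class signal.
n_per, n = 20, 40
labels = np.array([0] * n_per + [1] * n_per)
feats = np.vstack([rng.normal(-1.0, 0.5, (n_per, 2)),
                   rng.normal(+1.0, 0.5, (n_per, 2))])
A = rng.random((n, n)) < 0.05                            # sparse cross-cluster edges
A[:n_per, :n_per] |= rng.random((n_per, n_per)) < 0.4    # dense within clusters
A[n_per:, n_per:] |= rng.random((n_per, n_per)) < 0.4
A = np.triu(A, 1)
A = A | A.T                                              # undirected, no self-loops

train = np.array([0, 1, n_per, n_per + 1])               # two labeled seeds per class

def feature_view(idx, y):
    """Feature-view sub-model: nearest-centroid classifier on node features."""
    c0, c1 = feats[idx[y == 0]].mean(0), feats[idx[y == 1]].mean(0)
    d = np.linalg.norm(feats - c0, axis=1) - np.linalg.norm(feats - c1, axis=1)
    return 1.0 / (1.0 + np.exp(-2.0 * d))                # P(class 1) per node

def structure_view(idx, y, steps=10):
    """Structure-view sub-model: label propagation over the adjacency matrix."""
    p = np.full(n, 0.5)
    p[idx] = y
    deg = np.maximum(A.sum(1), 1)
    for _ in range(steps):
        p = (A @ p) / deg                                # average neighbor scores
        p[idx] = y                                       # clamp the labeled seeds
    return p

# Co-training: each view hands its most confident pseudo-labels to the other.
idx_f, y_f = train.copy(), labels[train].copy()
idx_s, y_s = train.copy(), labels[train].copy()
for _ in range(3):
    pf, ps = feature_view(idx_f, y_f), structure_view(idx_s, y_s)
    conf_f = np.setdiff1d(np.where(np.abs(pf - 0.5) > 0.4)[0], idx_s)
    idx_s = np.concatenate([idx_s, conf_f])              # feature view teaches structure view
    y_s = np.concatenate([y_s, (pf[conf_f] > 0.5).astype(int)])
    conf_s = np.setdiff1d(np.where(np.abs(ps - 0.5) > 0.4)[0], idx_f)
    idx_f = np.concatenate([idx_f, conf_s])              # structure view teaches feature view
    y_f = np.concatenate([y_f, (ps[conf_s] > 0.5).astype(int)])

# Ensemble the two views: a perturbation aimed at one view alone is diluted.
ensemble = 0.5 * (feature_view(idx_f, y_f) + structure_view(idx_s, y_s))
accuracy = ((ensemble > 0.5).astype(int) == labels).mean()
```

The key property the abstract relies on is visible in the last two lines: because the two sub-models consume different inputs (features vs. edges), an attacker who rewires edges degrades only the structure view, and the averaged score is still anchored by the feature view.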
Pages: 1161 - 1175 (14 pages)
Related Papers
50 items
  • [41] Robust Regularization Design of Graph Neural Networks Against Adversarial Attacks Based on Lyapunov Theory
    Yan, Wenjie
    Li, Ziqi
    Qi, Yongjun
    [J]. CHINESE JOURNAL OF ELECTRONICS, 2024, 33 (03) : 732 - 741
  • [43] Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
    Zhang, Haichao
    Wang, Jianyu
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [44] GenCo: Generative Co-training for Generative Adversarial Networks with Limited Data
    Cui, Kaiwen
    Huang, Jiaxing
    Luo, Zhipeng
    Zhang, Gongjie
    Zhan, Fangneng
    Lu, Shijian
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 499 - 507
  • [45] Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach
    Sun, Yiwei
    Wang, Suhang
    Tang, Xianfeng
    Hsieh, Tsung-Yu
    Honavar, Vasant
    [J]. WEB CONFERENCE 2020: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2020), 2020, : 673 - 683
  • [46] Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
    Wang, Jianyu
    Zhang, Haichao
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 6628 - 6637
  • [47] Defending Against Adversarial Attack Towards Deep Neural Networks Via Collaborative Multi-Task Training
    Wang, Derui
    Li, Chaoran
    Wen, Sheng
    Nepal, Surya
    Xiang, Yang
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (02) : 953 - 965
  • [48] Pairwise Gaussian Graph Convolutional Networks: Defense Against Graph Adversarial Attack
    Lu, Guangxi
    Xiong, Zuobin
    Meng, Jing
    Li, Wei
    [J]. 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 4371 - 4376
  • [49] Adversarial Attacks with Defense Mechanisms on Convolutional Neural Networks and Recurrent Neural Networks for Malware Classification
    Alzaidy, Sharoug
    Binsalleeh, Hamad
    [J]. APPLIED SCIENCES-BASEL, 2024, 14 (04):
  • [50] A robust defense for spiking neural networks against adversarial examples via input filtering
    Guo, Shasha
    Wang, Lei
    Yang, Zhijie
    Lu, Yuliang
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 153