Robustness verification of graph contrastive learning based on node feature adversarial attacks

Cited: 0
Authors
Xing Y. [1]
Wang X. [2]
Shi C. [1]
Huang H. [1]
Cui P. [3]
Affiliations
[1] School of Computer, Beijing University of Posts and Telecommunications, Beijing
[2] School of Software, Beihang University, Beijing
[3] Department of Computer Science and Technology, Tsinghua University, Beijing
Keywords
adversarial attack; adversarial training; graph contrastive learning; graph convolutional network; robustness verification
DOI
10.16511/j.cnki.qhdxxb.2024.21.002
Abstract
[Objective] Many recent studies have shown that graph neural networks lack robustness against adversarial attacks that perturb graph structures and node features, and their predictions may become unreliable under such attacks. Graph contrastive learning methods are affected in the same way. However, existing robustness evaluation methods are often entangled with specific attack algorithms, data labels, and downstream tasks, dependencies that are best avoided, especially in the self-supervised setting of graph contrastive learning. Therefore, this paper introduces a robustness verification algorithm for graph contrastive learning to assess the robustness of graph convolutional networks against node feature adversarial attacks.

[Methods] First, based on the positive and negative pairs used in graph contrastive learning models, this paper defines the robustness verification problem of graph contrastive learning as a similarity comparison between an adversarial sample and the target node together with its negative samples. The problem is then formulated as an optimization problem, which removes any dependency on attack algorithms, data labels, and downstream tasks. To solve this problem, a series of novel and effective methods are proposed. For the binary attributes commonly used in graph data, corresponding perturbation spaces are constructed. To cope with the large negative-sample space in graph contrastive learning, a negative sample sampling strategy is designed to improve solving efficiency. Because the binary discrete attributes and nonlinear activation functions make the optimization problem difficult to solve directly, relaxation techniques and dual optimization methods are employed to further improve solution efficiency.

[Results] To assess the effectiveness of the proposed robustness verification algorithm, we conducted experiments on the classic GRACE model with the Cora and CiteSeer datasets and evaluated its robustness using the proposed algorithm. As the perturbation intensity increased, the proportion of nodes verified as robust decreased rapidly and the proportion verified as non-robust increased significantly, while the proportion of unverifiable nodes remained low. These observations demonstrate the effectiveness of the proposed framework for verifying the robustness of graph contrastive learning. The experiments also revealed that the robustness of ARIEL, a contrastive learning model designed against specific attack algorithms, does not generalize: it exhibits poor verifiable robustness, suggesting vulnerability to other attack algorithms. Moreover, ablation experiments identified the adversarial attack components of ARIEL as the main cause of its reduced verifiable robustness. Finally, parameter experiments confirmed that the proposed negative sample sampling strategy is reasonable.
The results showed that sampling 20 negative samples is sufficient for the robustness verification algorithm to perform well with high efficiency.

[Conclusions] Analysis of the proposed methods and experimental results shows that the graph contrastive learning robustness verification algorithm presented in this study not only eliminates the dependency on attack algorithms, data labels, and downstream tasks but also provides a more comprehensive measurement than traditional robustness metrics. It can verify robustness in multiple directions, thereby promoting the development of comprehensively robust graph contrastive learning algorithms. © 2024 Press of Tsinghua University. All rights reserved.
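To make the verification criterion described above concrete, the sketch below illustrates, on a toy graph, the kind of similarity-margin check the abstract describes: a node is treated as verified robust if, for every feature perturbation within a bit-flip budget, its perturbed embedding stays more similar to its clean embedding than to any sampled negative. The encoder, graph, budget, and negative-sampling choices here are hypothetical stand-ins, and the brute-force enumeration is used purely for illustration in place of the paper's relaxation-and-dual solver.

```python
# Illustrative sketch only: a brute-force similarity-margin check inspired by
# the verification criterion in the abstract. The graph, encoder, flip budget,
# and negative sampling below are hypothetical, not the paper's actual setup.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Tiny toy graph: 6 nodes, binary features, one-layer GCN-style encoder.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = rng.integers(0, 2, size=(6, 8)).astype(float)  # binary node features
W = rng.normal(size=(8, 4))                         # frozen encoder weights

def encode(features):
    """Symmetrically normalized one-hop propagation followed by a linear map."""
    A_hat = A + np.eye(len(A))
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    Z = A_norm @ features @ W
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-norm embeddings

def verify_node(v, budget=2, num_negatives=3):
    """Check node v against all perturbations flipping <= `budget` of its own
    binary features, comparing against `num_negatives` sampled negatives."""
    Z_clean = encode(X)
    negatives = rng.choice([u for u in range(len(A)) if u != v],
                           size=num_negatives, replace=False)
    worst_margin = np.inf
    # Enumerate every feature perturbation of node v within the flip budget.
    for k in range(budget + 1):
        for flip in combinations(range(X.shape[1]), k):
            X_pert = X.copy()
            X_pert[v, list(flip)] = 1.0 - X_pert[v, list(flip)]
            z_v = encode(X_pert)[v]
            # Cosine margin: similarity to the clean target embedding minus
            # the best similarity to any sampled negative.
            margin = z_v @ Z_clean[v] - max(z_v @ Z_clean[u] for u in negatives)
            worst_margin = min(worst_margin, margin)
    return ("robust" if worst_margin > 0 else "non-robust"), worst_margin

print(verify_node(v=2, budget=2))
```

Because this toy check enumerates perturbations exhaustively, every node comes out either robust or non-robust; the "unverifiable" outcome reported in the experiments can only arise when a relaxed optimization, as used in the paper, cannot decide either way.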
Pages: 13-24
Number of pages: 11
Related papers
34 in total
  • [1] ABU-EL-HAIJA S, PEROZZI B, KAPOOR A, Et al., Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing, Proceedings of the 36th International Conference on Machine Learning, pp. 21-29, (2019)
  • [2] WU J, HE J R, XU J J., DEMO-Net: Degree-specific graph neural networks for node and graph classification, Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 406-415, (2019)
  • [3] KIPF T N, WELLING M., Variational graph auto-encoders, (2016)
  • [4] YOU J X, YING R, LESKOVEC J., Position-aware graph neural networks, Proceedings of the 36th International Conference on Machine Learning, pp. 7134-7143, (2019)
  • [5] GAO H Y, JI S W., Graph u-nets, Proceedings of the 36th International Conference on Machine Learning, pp. 2083-2092, (2019)
  • [6] ZHANG M H, CUI Z C, NEUMANN M, Et al., An end-to-end deep learning architecture for graph classification, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, pp. 4438-4445, (2018)
  • [7] VELICKOVIC P, FEDUS W, HAMILTON W L, Et al., Deep graph infomax, 7th International Conference on Learning Representations, (2019)
  • [8] HASSANI K, KHASAHMADI A H., Contrastive multi-view representation learning on graphs, Proceedings of the 37th International Conference on Machine Learning, pp. 4116-4126, (2020)
  • [9] ZHU Y Q, XU Y C, YU F, Et al., Deep graph contrastive representation learning, (2020)
  • [10] ZUGNER D, AKBARNEJAD A, GUNNEMANN S., Adversarial attacks on neural networks for graph data, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2847-2856, (2018)