[Objective] Many recent studies have shown that graph neural networks lack robustness against adversarial attacks that perturb graph structures and node features, and their predictions can become unreliable under such perturbations. Graph contrastive learning methods are similarly affected. However, existing robustness evaluation methods are often entangled with specific attack algorithms, data labels, and downstream tasks, dependencies that are best avoided, especially in the self-supervised setting of graph contrastive learning. Therefore, this paper introduces a robustness verification algorithm for graph contrastive learning that assesses the robustness of graph convolutional networks against adversarial attacks on node features.
[Methods] First, drawing on the positive and negative pairs that characterize graph contrastive learning models, this paper defines the robustness verification problem of graph contrastive learning as a similarity comparison between adversarial samples and the target node as well as its negative samples. This problem is then formulated as an optimization problem, which removes any dependency on attack algorithms, data labels, and downstream tasks. To solve this optimization problem, a series of novel and effective techniques is proposed. For the binary attributes commonly used in graph data, corresponding perturbation spaces are constructed. To cope with the large negative sample space in graph contrastive learning, a negative sample sampling strategy is designed to improve solving efficiency. Because the binary discrete attributes and nonlinear activation functions make the optimization problem difficult to solve directly, relaxation techniques and dual optimization methods are employed to further improve solving efficiency.
[Results] To assess the effectiveness of the proposed robustness verification algorithm for graph contrastive learning, experiments were conducted with the classic GRACE model on the Cora and CiteSeer datasets, and the model's robustness was evaluated with the proposed algorithm. As the perturbation intensity increased, the proportion of nodes verified as robust decreased rapidly, the proportion of nodes verified as non-robust increased significantly, and the proportion of unverifiable nodes remained low. These observations demonstrate the effectiveness of the proposed framework for verifying the robustness of graph contrastive learning. Additionally, the experiments revealed that the robustness of ARIEL, a contrastive learning model designed against specific attack algorithms, does not generalize: it exhibits poor verifiable robustness, suggesting vulnerability to other attack algorithms. Moreover, ablation experiments identified ARIEL's adversarial attack component as the main cause of its diminished verifiable robustness. Finally, parameter experiments demonstrated the soundness of the proposed negative sample sampling strategy.
The results showed that sampling 20 negative samples is sufficient for the robustness verification algorithm to achieve favorable performance with high efficiency.
[Conclusions] Analysis of the methods and experimental results shows that the graph contrastive learning robustness verification algorithm proposed in this study not only eliminates dependence on attack algorithms, data labels, and downstream tasks but also provides a more comprehensive measurement than traditional robustness metrics. It can verify robustness in multiple directions, thereby promoting the development of comprehensively robust graph contrastive learning algorithms.
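As an illustrative sketch of the verification condition described in [Methods] (the notation below is assumed for exposition and is not necessarily the paper's exact formulation), a target node v with anchor embedding z_v can be certified robust if its perturbed embedding remains closer to the anchor than to every sampled negative embedding z_u under all admissible feature perturbations:

\min_{\tilde{x} \in \mathcal{B}_{\Delta}(x)} \Big[ \operatorname{sim}\big(f(\tilde{x}), z_v\big) - \max_{u \in \mathcal{N}^{-}(v)} \operatorname{sim}\big(f(\tilde{x}), z_u\big) \Big] > 0,

where f denotes the trained encoder, \operatorname{sim} the contrastive similarity (e.g., cosine similarity), \mathcal{N}^{-}(v) the sampled set of negatives, and \mathcal{B}_{\Delta}(x) the perturbation space obtained by flipping at most \Delta binary attributes of x. If a lower bound on this margin, obtained via relaxation and dual optimization, is positive, the node is verified as robust; if an attack drives the margin below zero, it is verified as non-robust; otherwise it remains unverifiable.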