The paper considers the NP-hard problem of calculating the reliability of a network whose elements are subject to random failures. A universal method for evaluating various network reliability indices is the Monte Carlo method, which lends itself naturally to parallel implementation. Although the exact reliability calculation problem is NP-hard, an exact calculation is usually feasible for networks of small dimension (up to hundreds of edges). To this end, the factorization method and various acceleration techniques are used: primarily reduction and decomposition methods, strategies for selecting the factorization element, exact formulas for low-dimensional graphs, and others. However, such techniques have not been developed for all network reliability indices. Most of them exist for the classical reliability index, the probability of connectedness of a random graph. Some acceleration methods are also known for the diameter-constrained reliability and for the expected number of connected pairs of network nodes. The absence of such methods greatly complicates the calculation of the index under study, making its complexity close to exponential in the number of unreliable elements (communication channels and/or nodes).

When calculating network reliability by the Monte Carlo method, one randomly generates a number of realizations of the network and averages the values obtained for them (the connectivity indicator, the number of connected pairs, etc.); this average serves as an estimate of the reliability index. The sample variance then yields an estimate of the error of the obtained solution.
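The averaging scheme just described can be sketched in Python. This is a minimal illustration, not the paper's implementation: the function and parameter names (`estimate_reliability`, edge reliability `p`, etc.) are illustrative, and connectivity is checked by a plain BFS after each realization is sampled.

```python
import random
from collections import deque

def realization_is_connected(n, edges, p, rng):
    """Sample one network realization (each edge survives independently
    with probability p) and check via BFS whether all n nodes stay connected."""
    surviving = [e for e in edges if rng.random() < p]
    adjacency = {v: [] for v in range(n)}
    for u, v in surviving:
        adjacency[u].append(v)
        adjacency[v].append(u)
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adjacency[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

def estimate_reliability(n, edges, p, iters, seed=1):
    """Average the connectivity indicator over `iters` realizations.
    Since the indicator is Bernoulli, the estimator's variance is
    r_hat * (1 - r_hat) / iters, which gives the error estimate."""
    rng = random.Random(seed)
    hits = sum(realization_is_connected(n, edges, p, rng) for _ in range(iters))
    r_hat = hits / iters
    var_hat = r_hat * (1.0 - r_hat) / iters
    return r_hat, var_hat
```

For a triangle graph with perfectly reliable edges the estimate is exactly 1 with zero sample variance, while intermediate edge reliabilities yield estimates between 0 and 1 whose error shrinks as `iters` grows.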
For some indices, the variance can be estimated as a function of the sample size even before the calculation begins, so the number of algorithm iterations needed to achieve the required solution error can be determined in advance. In this paper, the probability of connectedness of a random graph with unreliable edges is considered as the reliability index. For this index, we propose to generate a realization of the graph simultaneously with checking its connectivity. Moreover, for the probability of connectedness the variance can indeed be bounded as a function of the sample size before the computation starts, and the number of iterations required for a given error can be fixed in advance. This approach also greatly simplifies the parallel implementation, eliminating the need to periodically stop all processes and wait for the variance to be evaluated.

In the parallel implementation, it is convenient to split the total number of iterations among all processes; each core then generates its share of realizations. It is assumed that the number of computing cores is large, and MPI is used for communication between them. One core therefore acts as a master and does not participate in generating graph realizations; the remaining cores send their averaged values to this collector process upon completion. It was shown that the parallel algorithm scales almost ideally up to 512,000 computing cores, after which the speedup gained from additional cores diminishes. A remedy in this case is to increase the number of collector agents and organize them hierarchically. It should be noted that the proposed model makes it possible to perform the Monte Carlo calculation for other network reliability indices as well, provided the variance can be estimated before the calculation begins.
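One way to interleave realization generation with the connectivity check, as proposed above, is to maintain a union-find structure while edge states are sampled and stop as soon as the realization is known to be connected. And because the connectivity indicator is Bernoulli, its variance never exceeds 1/4 whatever the unknown reliability, which bounds the required iteration count a priori. The following is a hedged sketch under those assumptions; the function names and the normal-quantile argument `z` are illustrative, not taken from the paper.

```python
import math
import random

def connected_while_sampling(n, edges, p, rng):
    """Sample edge states one by one and merge components on the fly
    (union-find with path halving); stop early once a single component
    covers all n nodes, so later edges need not be sampled at all."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for u, v in edges:
        if rng.random() < p:              # the edge survives in this realization
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
                if components == 1:       # already connected: early exit
                    return True
    return components == 1

def iterations_for_error(eps, z=1.96):
    """A priori sample size: the variance of a Bernoulli indicator is
    at most 1/4, so n >= z^2 / (4 * eps^2) trials suffice for a
    confidence half-width of eps at quantile z."""
    return math.ceil(z * z / (4.0 * eps * eps))
```

For example, a target half-width of 0.01 at the 95% level (z = 1.96) requires roughly 9,600 iterations, and this number can be fixed before any process starts, which is exactly what removes the need for mid-run variance checks.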
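The master/collector scheme described above can be illustrated without MPI by running each worker's share sequentially; in an actual MPI run, each rank would execute one `worker_share` call and the hit counts would be reduced (e.g. via `MPI_Reduce`) to the collector rank. All names here are illustrative, and for brevity each trial is a toy Bernoulli indicator standing in for the per-realization connectivity check.

```python
import random

def worker_share(worker_id, iters, p_success, seed_base=12345):
    """One worker's portion: run `iters` independent trials on its own
    RNG stream (seeded per worker so streams do not overlap) and return
    the number of successes."""
    rng = random.Random(seed_base + worker_id)
    return sum(rng.random() < p_success for _ in range(iters))

def parallel_estimate(total_iters, n_workers, p_success):
    """Split total_iters as evenly as possible among the workers; the
    'collector' then sums the per-worker hit counts and forms the final
    estimate, mirroring the master rank in the MPI version."""
    base, extra = divmod(total_iters, n_workers)
    hits = 0
    assigned = 0
    for w in range(n_workers):
        share = base + (1 if w < extra else 0)  # first `extra` workers get one more
        hits += worker_share(w, share, p_success)
        assigned += share
    assert assigned == total_iters              # nothing lost in the split
    return hits / total_iters
```

Because workers report only a count and their iteration budget is fixed in advance, no synchronization is needed until the final reduction; this is the property that lets the scheme scale until the single collector itself becomes the bottleneck, motivating the hierarchical collectors mentioned above.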