The most common Monte Carlo methods for sensitivity analysis of stochastic reaction networks are the finite difference (FD), Girsanov transformation (GT), and regularized pathwise derivative (RPD) methods. It has been numerically observed in the literature that the biased FD and RPD methods tend to have lower variance than the unbiased GT method, and that centering the GT method (CGT) reduces its variance. We provide a theoretical justification for these observations in terms of system size asymptotic analysis under what is known as the classical scaling. Our analysis applies to GT, CGT, and FD and shows that the standard deviations of their estimators, when normalized by the actual sensitivity, scale as $O(N^{1/2})$, $O(1)$, and $O(N^{-1/2})$, respectively, as the system size $N \to \infty$. In the case of the FD methods, the $N \to \infty$ asymptotics are obtained keeping the finite difference perturbation $h$ fixed. Our numerical examples verify that our order estimates are sharp and that the variance of the RPD method scales similarly to that of the FD methods. We combine our large $N$ asymptotics with previously known small $h$ asymptotics to obtain the best choice of $h$ in terms of $N$ and to estimate the number $N_s$ of simulations required to achieve a prescribed relative $L^2$ error $\delta$. This shows that $N_s$ depends on $\delta$ and $N$ as $\delta^{-2-\gamma_2/\gamma_1} N^{-1}$, $\delta^{-2}$, and $N\delta^{-2}$ for FD, CGT, and GT, respectively. Here $\gamma_1 > 0$ and $\gamma_2 > 0$ depend on the type of FD method used.
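A minimal sketch of where the $N_s$ rates come from, via a standard bias-variance decomposition of the relative $L^2$ error; the specific forms $b(h) \sim h^{\gamma_1}$ for the FD relative bias and $v(N,h) \sim N^{-1} h^{-\gamma_2}$ for its per-sample normalized variance are assumptions chosen to be consistent with the stated rates, not formulas quoted from the analysis:
\[
\delta^2 \;\approx\; b(h)^2 + \frac{v(N,h)}{N_s}
\;\sim\; h^{2\gamma_1} + \frac{1}{N_s\, N\, h^{\gamma_2}}.
\]
Balancing the bias term against $\delta$ suggests $h \sim \delta^{1/\gamma_1}$, and solving the variance term for $N_s$ then gives
\[
N_s \;\sim\; \delta^{-2}\, N^{-1}\, h^{-\gamma_2} \;\sim\; \delta^{-2-\gamma_2/\gamma_1}\, N^{-1}.
\]
For the unbiased GT and CGT estimators there is no $h$ to tune; with normalized standard deviations $O(N^{1/2})$ and $O(1)$, respectively, the same decomposition yields $N_s \sim N\delta^{-2}$ and $N_s \sim \delta^{-2}$.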