It has been argued that estimating the spectral density function of a stationary stochastic process at frequency zero (the so-called long-run variance) is an ill-posed problem, so that any estimate has infinite minimax risk (e.g., Pötscher 2002). Most often the long-run variance is a nuisance parameter appearing in the limit distribution of some statistic, and an estimate of it is then needed to obtain test statistics with a pivotal distribution. In this context, we argue that such an impossibility result is irrelevant. We show that, in the presence of the discontinuities that cause the ill-posedness of the long-run variance estimation problem, using the true value of the spectral density function at frequency zero leads to tests whose size is either 0 or 100% and, hence, to confidence intervals that are completely uninformative. On the other hand, tests based on standard estimates of the long-run variance have well-defined limit distributions and are, accordingly, more informative.
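
To fix ideas about the "standard estimates" referred to above, the following is a minimal sketch of one common choice, a Bartlett-kernel (Newey-West-type) long-run variance estimator; the function name `long_run_variance` and the rule-of-thumb bandwidth are illustrative assumptions, not part of the paper:

```python
import numpy as np

def long_run_variance(x, bandwidth=None):
    """Bartlett-kernel estimate of the long-run variance of a stationary
    series x, i.e. 2*pi times the spectral density at frequency zero.
    Illustrative sketch; bandwidth rule is a common textbook choice."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if bandwidth is None:
        # rule-of-thumb truncation lag, growing slowly with the sample size
        bandwidth = int(np.floor(4 * (n / 100.0) ** (2.0 / 9.0)))
    xc = x - x.mean()
    # lag-0 autocovariance
    lrv = xc @ xc / n
    # add Bartlett-weighted autocovariances up to the truncation lag
    for j in range(1, bandwidth + 1):
        gamma_j = xc[j:] @ xc[:-j] / n
        lrv += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma_j
    return lrv
```

For an i.i.d. series this estimate is close to the ordinary variance, since the weighted autocovariance terms are near zero; under serial correlation the extra terms pick up the covariance mass at frequency zero.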