Over the past several decades, fractal geometry has found widespread application in the theoretical and experimental sciences to describe the patterns and processes of nature. The defining features of a fractal object (or process) are self-similarity and scale-invariance; that is, the same pattern of complexity is present regardless of scale. These features imply that fractal objects have an infinite level of detail, and therefore require an infinite sample size for their proper characterization. In practice, operational algorithms for measuring the fractal dimension D of natural objects necessarily rely on a finite sample of points (or, equivalently, finite resolution of a path, boundary trace, or other image). This gives rise to a paradox in empirical dimension estimation: the object whose fractal dimension is to be estimated must first be approximated as a finite sample in Euclidean embedding space (e.g., points on a plane), and this finite sample is then used to approximate the true (but unknown) fractal dimension. Although many researchers have recognized the problem of estimating fractal dimension from a finite sample, none have addressed the theoretical relationship between sample size and the reliability of dimension estimates based on box counting. In this paper, a theoretical probability-based model is developed to examine this relationship. Using the model, it is demonstrated that very large sample sizes (typically one to many orders of magnitude greater than those used in most empirical studies) are required for reliable dimension estimation. The required sample size increases exponentially with D, and a 10^D increase in sampling effort is required for each decadal (order of magnitude) increase in the scaling range over which dimension is reliably estimated. It is also shown that dimension estimates are unreliable for box counts exceeding one-tenth the sample size.
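The dependence of box-counting estimates on sample size can be illustrated with a minimal sketch (not the paper's model). The code below, a hypothetical example, draws a finite chaos-game sample from the Sierpinski triangle (true D = log 3 / log 2 ≈ 1.585) and estimates D as the least-squares slope of log N(ε) against log(1/ε) over dyadic box sizes; with a small sample, the finer box counts approach the sample size and the estimate degrades, consistent with the rule of thumb that counts should stay below one-tenth of the sample.

```python
import math
import random

def sierpinski_points(n, seed=0):
    """Chaos-game sample of the Sierpinski triangle (true D = log 3 / log 2)."""
    random.seed(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.25, 0.25
    pts = []
    for _ in range(n):
        vx, vy = random.choice(verts)          # jump halfway to a random vertex
        x, y = (x + vx) / 2, (y + vy) / 2
        pts.append((x, y))
    return pts

def box_count_dimension(pts, levels):
    """Slope of log N(eps) vs log(1/eps) for box sizes eps = 2^-k, k in levels."""
    xs, ys = [], []
    for k in levels:
        m = 2 ** k                             # m boxes per side, eps = 1/m
        occupied = {(int(px * m), int(py * m)) for px, py in pts}
        xs.append(k * math.log(2))             # log(1/eps)
        ys.append(math.log(len(occupied)))     # log N(eps)
    # ordinary least-squares slope of ys on xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Large sample: box counts stay far below n/10 over these scales.
pts = sierpinski_points(100_000)
print(box_count_dimension(pts, range(2, 7)))   # close to 1.585
```

Shrinking the sample (e.g., 500 points) while keeping the same range of box sizes biases the estimate downward, since at the finest scales nearly every point occupies its own box.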