In the geosciences, fine-scale detail of geomorphic surfaces, commonly parameterized as roughness, is growing in importance as a source of information for modeling natural phenomena and classifying features of interest. Terrestrial laser scanning (TLS), a light detection and ranging (LiDAR) technique now well known to geologists, is a natural choice for collecting such geospatial data. While many recent studies have investigated methodologies for estimating surface roughness from point clouds, research on the influence of instrumental bias on those point clouds and the resulting roughness estimates is scant. A scale-dependent bias in TLS range measurements could affect the outcome of studies that rely on high-resolution surface morphology. A growing number of research applications in geomorphology, neotectonics, and other disciplines seek to measure the roughness of surfaces with local topographic variations (asperities) on the order of a few centimeters or less in size. These asperities may manifest as bed forms or pebbles in a streambed, or as wavy textures on fault-slip surfaces. To assess the feasibility of applying TLS point cloud data sets to the measurement of centimeter-scale surface roughness, we evaluated the relationship between roughness values of dimensionally controlled test targets obtained from TLS scans and from numerical simulations. We measured and simulated instrument rangefinder noise to estimate its influence on surface roughness measurements; this influence was found to decrease with increasing real surface roughness. The size of the area sampled by a single point measurement (the effective radius) was also estimated, and the ratio of the effective radius to the radius of surface asperities was found to correlate with the disparity between measured and expected roughness.
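The quadrature effect of additive rangefinder noise on an RMS roughness estimate, and why its relative influence shrinks as the real surface roughness grows, can be illustrated with a minimal Monte Carlo sketch. All parameter values below (the 2 mm noise level, the Gaussian surface-height model) are illustrative assumptions, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_rms_roughness(true_rms, noise_sigma, n=200_000):
    """Simulate the RMS roughness recovered from a synthetic surface
    whose heights have standard deviation `true_rms`, measured with
    additive Gaussian rangefinder noise of std `noise_sigma` (metres)."""
    heights = rng.normal(0.0, true_rms, n)              # "true" surface heights
    ranges = heights + rng.normal(0.0, noise_sigma, n)  # noisy TLS range samples
    return ranges.std()

# noise adds roughly in quadrature: measured ~ sqrt(true**2 + noise**2),
# so the relative inflation falls quickly as true roughness increases
noise = 0.002  # hypothetical 2 mm rangefinder noise
for true_rms in (0.005, 0.02, 0.05):  # 5 mm, 2 cm, 5 cm surfaces
    est = measured_rms_roughness(true_rms, noise)
    bias = 100.0 * (est - true_rms) / true_rms
    print(f"true {true_rms*1000:4.0f} mm -> measured {est*1000:6.2f} mm (+{bias:.1f}%)")
```

For the smoothest surface the noise-induced inflation dominates, while for surfaces much rougher than the noise level it becomes negligible, consistent with the trend reported above.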
Rangefinder noise was found to inflate roughness estimates above expected values by up to ~5%, and the smoothing effect of the finite measurement footprint was found to depress them by up to 20%. Based on these results, it is evident that TLS point cloud geometry is correlated with instrument parameters, scan range, and the morphology of the real surface. Because different geological applications of TLS may call for relative or absolute measurements of roughness at widely different scales, the presence of these biases imposes constraints on the choice of instrument and scan network design. A general solution for such measurement biases lies in the development of calibration procedures for TLS roughness measurement strategies, for which the results of this study establish a theoretical basis.
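The opposite bias, roughness lost to footprint smoothing, can likewise be sketched by averaging a synthetic sinusoidal surface over a uniform beam footprint. The surface model and all dimensions here are illustrative assumptions, not the test targets used in the study:

```python
import numpy as np

def roughness_retained(asperity_radius, effective_radius, n=100_000):
    """Fraction of RMS roughness retained when a sinusoidal surface
    (asperity radius = half the wavelength) is averaged over a beam
    footprint of half-width `effective_radius` (metres) -- a crude
    1-D model of smoothing by a finite TLS measurement spot."""
    x = np.linspace(0.0, 1.0, n)                  # 1 m synthetic profile
    wavelength = 2.0 * asperity_radius
    surface = np.sin(2.0 * np.pi * x / wavelength)
    dx = x[1] - x[0]
    w = max(1, int(round(2.0 * effective_radius / dx)))  # footprint in samples
    kernel = np.ones(w) / w                       # uniform footprint average
    smoothed = np.convolve(surface, kernel, mode="same")
    return smoothed.std() / surface.std()

# larger effective-radius-to-asperity ratios smooth away more roughness
for r_eff in (0.001, 0.003, 0.005):               # 1, 3, 5 mm footprint radius
    frac = roughness_retained(asperity_radius=0.01, effective_radius=r_eff)
    print(f"r_eff/r_asp = {r_eff/0.01:.1f}: {100.0*(1.0-frac):.1f}% roughness lost")
```

The underestimation grows with the ratio of effective radius to asperity radius, mirroring the correlation reported in the abstract.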