Linewidths and line shapes are key criteria determining the utility of a nuclear magnetic resonance (NMR) spectrum, and considerable effort is usually devoted to shimming the magnetic field (B0) to ensure optimum resolution. However, even if the external field is almost perfectly homogeneous, the sample itself can induce gradients owing to susceptibility effects arising from its overall shape or internal heterogeneity. Thus, magnetic field gradients nearly always contribute significantly to the linewidths and line shapes in an NMR spectrum. Reference deconvolution is a technique that uses the shape of a single resonance line to measure the actual frequency distribution produced by the local B0 inhomogeneity and then deconvolves that distribution from the whole spectrum. It is a simple linear process that requires no prior knowledge of the number of lines, their intensities, or their relaxation characteristics; no fitting procedures are used. This article reviews the reference deconvolution method, demonstrates its application to one-dimensional NMR spectroscopy, and discusses the trade-offs between resolution and signal-to-noise ratio. (C) 2000 John Wiley & Sons, Inc.
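The linear procedure summarized above can be sketched numerically. The following is a minimal illustration in NumPy, not the article's implementation: a synthetic FID is broadened by a Gaussian decay standing in for the unknown B0 inhomogeneity, the reference line is isolated by windowing the spectrum, and the ratio of the ideal to the observed reference FID forms a correction function applied to the whole FID before Fourier transformation. All parameters (peak frequencies, amplitudes, decay rates, window width, regularization constant) are illustrative assumptions.

```python
import numpy as np

# --- Synthetic data (all values are illustrative assumptions) ---
n, dt = 4096, 1e-3                     # points and dwell time (s)
t = np.arange(n) * dt
freqs = np.array([50.0, 120.0, 200.0]) # line positions (Hz)
amps = np.array([1.0, 0.5, 0.8])       # line amplitudes
r2 = 2.0                               # natural decay rate (1/s)

# Ideal FID: sum of exponentially decaying complex sinusoids.
fid_ideal = sum(a * np.exp(2j * np.pi * f * t - r2 * t)
                for a, f in zip(amps, freqs))

# Inhomogeneity broadening multiplies every line's FID by the same
# decay; here a Gaussian stands in for the B0 distribution.
fid_exp = fid_ideal * np.exp(-(6.0 * t) ** 2)

spec = np.fft.fft(fid_exp)             # broadened spectrum
freq_axis = np.fft.fftfreq(n, dt)

# --- Reference deconvolution ---
# 1. Isolate the (well-separated) reference line at 50 Hz by
#    windowing the spectrum, then transform back to the time domain.
window = np.abs(freq_axis - freqs[0]) < 20.0
ref_fid_exp = np.fft.ifft(np.where(window, spec, 0.0))

# 2. Synthesize the ideal reference FID: known frequency, target
#    (purely Lorentzian) lineshape.
ref_fid_ideal = amps[0] * np.exp(2j * np.pi * freqs[0] * t - r2 * t)

# 3. Correction function = ideal / observed reference FID, with a
#    small regularization so late-time noise does not blow up.
reg = (1e-3 * np.max(np.abs(ref_fid_exp))) ** 2
corr = ref_fid_ideal * np.conj(ref_fid_exp) / (np.abs(ref_fid_exp) ** 2 + reg)

# 4. Apply the correction to the whole FID and transform: every line
#    is narrowed by the same factor, with no fitting of the multiplet.
spec_corr = np.fft.fft(fid_exp * corr)
```

Because the inhomogeneity decay multiplies every line's FID identically, a single division in the time domain corrects the entire spectrum at once, which is why the method needs no knowledge of the other lines.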