The problem of estimating parameters θ which determine the mean μ(θ) of a Gaussian-distributed observation X is considered. It is noted that the maximum likelihood (ML) estimate (in this case, the least squares estimate) has desirable statistical properties, but can be difficult to compute when μ(θ) is a nonlinear function of θ. An estimate formed by combining ML estimates based on subsections of the data vector X is proposed as a computationally inexpensive alternative. The main result of the paper is that this alternative estimate, termed here the divide and conquer (DAC) estimate, has ML performance in the small-error region when the data vector X is appropriately subdivided. As an example application, an inexpensive range-difference-based position estimator is derived, and shown via Monte Carlo simulation to have small-error-region mean square error equal to the Cramér-Rao lower bound (CRB). © 1990 IEEE
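The combination of subsection estimates can be illustrated with a minimal sketch. The example below is a hypothetical linear-Gaussian toy problem, not the paper's nonlinear setup: per-block least squares estimates are combined with weights given by each block's Fisher information, which in the linear case recovers the full-data least squares estimate exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian model (illustrative assumption, not the paper's
# nonlinear mu(theta)): X = H @ theta + noise.
n, p, blocks = 120, 3, 4
H = rng.standard_normal((n, p))
theta_true = np.array([1.0, -2.0, 0.5])
X = H @ theta_true + 0.1 * rng.standard_normal(n)

# Full-data ML (least squares) estimate.
theta_full, *_ = np.linalg.lstsq(H, X, rcond=None)

# Divide and conquer: per-block LS estimates combined with weights
# given by each block's Fisher information H_k^T H_k.
info_sum = np.zeros((p, p))
weighted_sum = np.zeros(p)
for Hk, Xk in zip(np.array_split(H, blocks), np.array_split(X, blocks)):
    info_k = Hk.T @ Hk
    theta_k = np.linalg.solve(info_k, Hk.T @ Xk)
    info_sum += info_k
    weighted_sum += info_k @ theta_k

theta_dac = np.linalg.solve(info_sum, weighted_sum)

# In the linear case, the information-weighted combination equals the
# full least squares estimate (up to numerical precision).
print(np.allclose(theta_full, theta_dac))  # → True
```

In the nonlinear setting studied in the paper, the per-block estimates are only locally equivalent to this linear picture, which is why the DAC estimate matches ML performance in the small-error region rather than globally.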