Consider the problem of estimating $\mu$ based on the observation of $Y_0, Y_1, \ldots, Y_n$, where it is assumed only that $Y_0, Y_1, \ldots, Y_\kappa \stackrel{\text{iid}}{\sim} N(\mu, \sigma^2)$ for some unknown $\kappa$. Unlike the traditional change-point problem, the focus here is not on estimating $\kappa$, which is now a nuisance parameter. When it is known that $\kappa = k$, the sample mean $\bar{Y}_k = \sum_{i=0}^{k} Y_i/(k+1)$ provides, in addition to excellent efficiency properties, safety in the sense that it is minimax under squared error loss. Unfortunately, this safety breaks down when $\kappa$ is unknown; indeed, if $k > \kappa$, the risk of $\bar{Y}_k$ is unbounded. To address this problem, a generalized minimax criterion is considered whereby each estimator is evaluated by its maximum risk under $Y_0, Y_1, \ldots, Y_\kappa \stackrel{\text{iid}}{\sim} N(\mu, \sigma^2)$ for each possible value of $\kappa$. An essentially complete class under this criterion is obtained. Generalizations to other situations, such as variance estimation, are illustrated.
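The following is a minimal Monte Carlo sketch, not taken from the paper, illustrating why the safety of $\bar{Y}_k$ breaks down when $\kappa$ is unknown. The post-change behaviour (here a mean shift of size `delta` after index `kappa`) is purely an assumption for illustration; the paper leaves it arbitrary, and the values of `mu`, `sigma`, `kappa`, `n`, and `delta` are hypothetical.

```python
# Illustrative simulation (assumed setup): empirical squared-error risk of the
# sample mean Ybar_k when the distribution may change after index kappa.
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, kappa, n = 0.0, 1.0, 4, 9   # Y_0..Y_kappa ~ N(mu, sigma^2); later Y_i shifted
n_rep = 20_000                         # Monte Carlo replications

def risk_of_sample_mean(k, delta):
    """Monte Carlo estimate of E[(Ybar_k - mu)^2] under a hypothetical mean shift
    of size delta applied to observations after the change point kappa."""
    y = rng.normal(mu, sigma, size=(n_rep, n + 1))
    y[:, kappa + 1:] += delta              # assumed contamination after kappa
    ybar_k = y[:, :k + 1].mean(axis=1)     # Ybar_k = sum_{i=0}^{k} Y_i / (k + 1)
    return np.mean((ybar_k - mu) ** 2)

for delta in (0.0, 5.0, 50.0):
    print(f"delta = {delta:5.1f}:",
          {k: round(risk_of_sample_mean(k, delta), 3) for k in (kappa, n)})
# With k = kappa the risk stays near sigma^2 / (kappa + 1); with k = n > kappa the
# risk grows without bound as delta grows, matching the claim that the risk of
# Ybar_k is unbounded when k > kappa.
```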