The effect of the choice of maternal age-specific prevalence curve on the model-predicted Down syndrome detection rate was examined. All 19 published regression curves from 11 birth prevalence series in four meta-analyses were included. The detection rate for a five per cent false-positive rate was estimated for three combinations of markers. For free beta human chorionic gonadotropin and alpha-fetoprotein the lowest predicted detection rate was 62.3 per cent and the highest 64.1 per cent, a range of 1.8 per cent. When unconjugated oestriol was added as a third marker the detection rate was 65.6-67.3 per cent, a 1.7 per cent range, and when inhibin A was added as a fourth marker it was 72.0-73.4 per cent, a 1.4 per cent range. The number of series included in the regression had the largest effect: when authors had used a subset thought to have the most complete ascertainment, the predicted detection rate generally increased. The type of regression equation used and restrictions on the age range over which the regression was performed were less important factors. The effect of the choice of curve on the predicted increase in detection achieved by incorporating additional markers was relatively small: 3.1-3.3 per cent for unconjugated oestriol and a further 6.1-6.5 per cent for inhibin A. This analysis shows that the model inaccuracy arising from the choice of maternal age curve is not negligible, but is unlikely to be large enough to influence Down syndrome screening policy decisions.
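The abstract does not spell out the risk-model mechanics, but the calculation it describes is conventionally done by combining an age-specific prior with Gaussian likelihood ratios for the markers in log10 multiple-of-the-median (MoM) units, then fixing the risk cutoff to yield a five per cent false-positive rate. Below is a minimal Monte Carlo sketch of that calculation for the two-marker case; the prevalence-curve coefficients, marker means, standard deviations, and correlation are all illustrative placeholders, not values from this paper or from any of the 19 published curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative maternal age-specific prevalence curve of the common
# additive-exponential form p(a) = A + exp(B + C*a). Coefficients are
# placeholders, not any of the 19 published regressions.
def prevalence(age):
    return 6e-4 + np.exp(-16.2 + 0.29 * age)

# Assumed maternal age distribution of the screened population
ages = np.clip(rng.normal(27.0, 5.5, size=200_000), 15, 45)

# Two-marker Gaussian model in log10 MoM (free beta-hCG, AFP);
# means, SDs, and correlation are illustrative values only.
mu_aff = np.array([0.30, -0.10])   # affected pregnancies
mu_un = np.array([0.0, 0.0])       # unaffected pregnancies
sd = np.array([0.27, 0.17])
rho = 0.1
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

def likelihood_ratio(x):
    """Gaussian density ratio f_affected(x) / f_unaffected(x)."""
    inv = np.linalg.inv(cov)
    d_aff = x - mu_aff
    d_un = x - mu_un
    q_aff = np.einsum('ij,jk,ik->i', d_aff, inv, d_aff)
    q_un = np.einsum('ij,jk,ik->i', d_un, inv, d_un)
    return np.exp(-0.5 * (q_aff - q_un))

# Simulate marker levels for both outcomes at every sampled age
x_un = rng.multivariate_normal(mu_un, cov, size=ages.size)
x_aff = rng.multivariate_normal(mu_aff, cov, size=ages.size)

prior = prevalence(ages)
odds = prior / (1 - prior)
risk_un = odds * likelihood_ratio(x_un)    # posterior odds, unaffected
risk_aff = odds * likelihood_ratio(x_aff)  # posterior odds, affected

# Risk cutoff fixing the false-positive rate at five per cent
cutoff = np.quantile(risk_un, 0.95)

# Detection rate: affected cases weighted by age-specific prevalence,
# so the result reflects the age mix of affected pregnancies
w = prior / prior.sum()
detection = np.sum(w * (risk_aff > cutoff))
print(f"Detection rate at 5% false-positive rate: {100 * detection:.1f}%")
```

Re-running such a simulation with each published prevalence curve substituted in turn, while holding the marker parameters fixed, reproduces the kind of sensitivity analysis the abstract reports: only the prior odds change, so the spread of detection rates isolates the effect of the maternal age curve.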