Pretesting is generally agreed to be an indispensable stage in survey questionnaire development, yet we know little about how well different pretesting methods identify particular types of problems. This study compared four pretesting methods, applying each in repeated trials to a single questionnaire. The four methods were conventional pretests, behavior coding, cognitive interviews, and expert panels. We developed a model-based coding scheme that classified problems as respondent-semantic, respondent-task, interviewer-task, or analysis. On average, expert panels identified the most problems. Conventional pretesting and behavior coding were the only methods to identify significant numbers of interviewer problems; by contrast, expert panels and cognitive interviews were the only methods to diagnose a nontrivial number of analysis problems. Expert panels and behavior coding were more consistent than the other methods, both in the number of problems identified across trials and in the distribution of problem types. Judged by the particular problems identified, behavior coding was the most reliable method. Conventional pretests and behavior coding cost about the same, cognitive interviews were somewhat less expensive, and expert panels were considerably cheaper.