A variety of model selection criteria have been developed, of general and specific types. Most of these aim at selecting a single model with good overall properties, for example, formulated via average prediction quality or shortest estimated overall distance to the true model. The Akaike, the Bayesian, and the deviance information criteria, along with many suitable variations, are examples of such methods. These methods are not concerned, however, with the actual use of the selected model, which varies with context and application. The present article takes the view that the model selector should instead focus on the parameter singled out for interest; in particular, a model that gives good precision for one estimand may be worse when used for inference on another estimand. We develop a method that, for a given focus parameter, estimates the precision of any submodel-based estimator. The framework is that of large-sample likelihood inference. Using an unbiased estimate of limiting risk, we propose a focused information criterion for model selection. We investigate and discuss properties of the method, establish some connections to Akaike's information criterion, and illustrate its use in a variety of situations.
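Schematically, writing $\widehat{\mu}_S$ for the estimator of the focus parameter under a candidate submodel $S$, the criterion estimates the limiting mean squared error of $\widehat{\mu}_S$ as squared bias plus variance. The following is a minimal sketch in generic notation of our own, not necessarily the article's:

\[
\mathrm{FIC}(S) \;=\; \widehat{b}_S^{\,2} \;-\; \widehat{\mathrm{Var}}\bigl(\widehat{b}_S\bigr) \;+\; \widehat{\mathrm{Var}}\bigl(\widehat{\mu}_S\bigr),
\]

where $\widehat{b}_S$ estimates the limiting bias of $\widehat{\mu}_S$. Subtracting $\widehat{\mathrm{Var}}(\widehat{b}_S)$ corrects the plug-in squared-bias term, since $\mathrm{E}\,\widehat{b}_S^{\,2} = b_S^2 + \mathrm{Var}(\widehat{b}_S)$; this correction is what makes the criterion an unbiased estimate of the limiting risk, and the submodel with the smallest criterion value is selected.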