In many cases, when scientists use hypothesis tests they are fully aware that the statistical model itself, and a fortiori the null hypothesis, is only a partial description of reality and, from a logical point of view, is not strictly true. In particular, this applies to black-box models, those most frequently used in statistics. From this point of view it does not seem very reasonable to choose between hypotheses in terms of their probabilities of error or, even worse, their conditional probabilities of error, since these indexes are too coarse to be useful. In this paper a wide range of classical hypothesis testing problems are examined from the point of view of model selection: the acceptance or rejection of the null hypothesis is regarded as the selection of the more appropriate of two models, one nested inside the other. There are several reasons why a (strictly speaking false) model might nevertheless be developed: it allows us to summarize data variability while maintaining a reasonable fit with the observed data, and it facilitates the prediction of new phenomena with a certain degree of accuracy. Quite simply, modeling makes reality more readily understandable. In this context, some problems of the classical hypothesis testing approach are discussed, and several alternatives are considered in developing methods, based on statistical estimation theory, for nested-model selection and for a wide class of hypothesis testing problems. These methods are applied to various situations and compared with classical techniques. Finally, a number of questions concerning the nature of statistical inference are posed. (C) 2002 Elsevier Science B.V. All rights reserved.
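The contrast the abstract draws, choosing a nested model rather than judging a null hypothesis by error probabilities, can be sketched with a stand-in criterion. The toy example below is not the paper's own method (which is based on statistical estimation theory); it simply selects between two nested Gaussian models by AIC. The simulated data, the two models, and the use of AIC are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)  # simulated data, true mean 0.3
n = len(x)

# Null model M0: x ~ N(0, sigma^2); one free parameter (sigma^2).
sigma2_0 = np.mean(x ** 2)                   # MLE of sigma^2 with mu fixed at 0
loglik_0 = -0.5 * n * (np.log(2 * np.pi * sigma2_0) + 1)

# Alternative model M1: x ~ N(mu, sigma^2); two free parameters.
mu_hat = np.mean(x)
sigma2_1 = np.mean((x - mu_hat) ** 2)        # MLE of sigma^2 with mu free
loglik_1 = -0.5 * n * (np.log(2 * np.pi * sigma2_1) + 1)

# AIC = 2k - 2 log L; the model with the smaller AIC is preferred.
aic_0 = 2 * 1 - 2 * loglik_0
aic_1 = 2 * 2 - 2 * loglik_1
chosen = "M0 (mu = 0)" if aic_0 <= aic_1 else "M1 (mu free)"
print(chosen)
```

Because the models are nested, the richer model's maximized likelihood is never smaller; the AIC penalty is what lets the simpler (strictly false, but adequate) model win when the extra parameter buys too little fit.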