A recent probabilistic model unified findings on sequential generalization ("rule learning") via independently motivated principles of generalization (Frank and Tenenbaum, 2011). Endress critiques this work, arguing that learners do not prefer more specific hypotheses (a central assumption of the model), that "common-sense psychology" provides an adequate explanation of rule learning, and that Bayesian models imply incorrect optimality claims yet can be fit to any pattern of data. Endress's response raises useful points about the importance of mechanistic explanation, but his specific critiques of our work are not supported. More broadly, I argue that Endress undervalues the importance of formal models. Although probabilistic models must meet a high standard to be used as evidence for optimality claims, they nevertheless provide a powerful framework for describing cognition. © 2013 Elsevier B.V. All rights reserved.