When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria: the likelihood ratio test, Akaike's information criterion (AIC), the corrected AIC (AICc), the Bayesian information criterion (BIC), Hannan and Quinn's information criterion (HQIC), and the consistent AIC (CAIC). The criteria were compared with respect to correct model selection among a set of three competing mixed-format IRT models (i.e., one-parameter logistic/partial credit [1PL/PC], two-parameter logistic/generalized partial credit [2PL/GPC], and three-parameter logistic/generalized partial credit [3PL/GPC]). The criteria were able to correctly select the less parameterized IRT models, including the PC, 1PL, and 1PL/PC models. In contrast, the criteria were less able to correctly select the more heavily parameterized IRT models, including the GPC, 3PL, and 3PL/GPC models. Implications of the findings and recommendations are discussed.
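For reference, the five information criteria named above follow their standard penalized-likelihood definitions, each trading off model fit (the deviance, -2 times the maximized log-likelihood) against a penalty on the number of estimated parameters. The sketch below is not taken from the study; the function name `information_criteria` and its arguments are illustrative assumptions, and it simply encodes the textbook formulas under the assumption that the maximized log-likelihood, parameter count, and sample size are already available from a fitted model.

```python
import math

def information_criteria(log_lik: float, k: int, n: int) -> dict:
    """Standard penalized-likelihood model selection criteria.

    log_lik : maximized log-likelihood of the fitted model
    k       : number of estimated parameters
    n       : sample size (e.g., number of examinees)
    """
    deviance = -2.0 * log_lik  # common fit term in all five criteria
    return {
        "AIC":  deviance + 2 * k,
        "AICc": deviance + 2 * k + (2 * k * (k + 1)) / (n - k - 1),
        "BIC":  deviance + k * math.log(n),
        "HQIC": deviance + 2 * k * math.log(math.log(n)),
        "CAIC": deviance + k * (math.log(n) + 1),
    }

# For each criterion, the candidate model with the smallest value is
# preferred. The likelihood ratio test works differently: it compares
# two nested models via the difference in their deviances, referred to
# a chi-square distribution with degrees of freedom equal to the
# difference in the number of estimated parameters.
```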