A great advantage of imprecise probability models over models based on precise, traditional probabilities is their ability to reflect the amount of knowledge on which they are based. Consequently, imprecise probability models promise to offer a vivid tool for handling situations of prior-data conflict in (generalized) Bayesian inference. In this paper we consider a general class of recently studied imprecise probability models, including the Imprecise Dirichlet Model under prior information and, more generally, the framework of Quaeghebeur and de Cooman for imprecise inference in canonical exponential families. We demonstrate that such models, in their originally proposed form, are insensitive to the extent of prior-data conflict. We propose an extension that re-establishes the natural relationship between knowledge and imprecision: the higher the discrepancy between the observed sample and what was expected from prior knowledge, the higher the imprecision in the posterior, producing cautious inferences if, and only if, caution is needed. Our approach is illustrated by several examples and simulation results.