The purpose of this work is to provide theoretical foundations of, as well as some computational aspects of, a theory for analysing decisions under risk when the available information is vague and imprecise. Many approaches to modelling imprecise information, e.g., by using interval methods, have prevailed. However, such representation models are unnecessarily restrictive, since they do not admit discrimination between beliefs in different values, i.e., all epistemologically possible values carry equal weight. In many situations, for instance when the underlying information results from learning techniques based on variance analyses of statistical data, the expressibility must be extended for a more perceptive treatment of the decision situation. Our contribution herein is an approach that enables a refinement of the representation model, allowing for a more elaborate discrimination of possible values by using belief distributions with weak restrictions. We show how to derive admissible classes of local distributions from sets of global distributions and introduce measures expressing to what extent explicit local distributions can be used for modelling decision situations. As it turns out, this results in a theory with very attractive features from a computational viewpoint.
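To make the distinction concrete, the short sketch below (in Python, not part of the paper's formalism) contrasts a plain interval estimate, in which all epistemologically possible values carry equal weight, with a simple belief distribution over the same interval that discriminates between values. The class names and the triangular shape of the distribution are illustrative assumptions only, chosen as a minimal example of the kind of weighted representation discussed above.

```python
# Illustrative sketch only: contrasts an interval representation of an imprecise
# probability (all values in the interval are equally possible) with a simple
# belief distribution over the same interval (some values carry more belief).
# The names IntervalEstimate and BeliefDistribution, and the triangular shape,
# are assumptions made purely for illustration.

from dataclasses import dataclass


@dataclass
class IntervalEstimate:
    """Interval representation: any value in [lower, upper] is admissible,
    and no value is believed more strongly than another."""
    lower: float
    upper: float

    def admissible(self, p: float) -> bool:
        return self.lower <= p <= self.upper


@dataclass
class BeliefDistribution:
    """A belief distribution over [lower, upper]: belief peaks at `mode`
    and falls off linearly towards the interval endpoints."""
    lower: float
    upper: float
    mode: float

    def belief(self, p: float) -> float:
        if not (self.lower <= p <= self.upper):
            return 0.0
        if p <= self.mode:
            span = self.mode - self.lower
            return (p - self.lower) / span if span > 0 else 1.0
        span = self.upper - self.mode
        return (self.upper - p) / span if span > 0 else 1.0


if __name__ == "__main__":
    interval = IntervalEstimate(0.2, 0.6)
    dist = BeliefDistribution(0.2, 0.6, mode=0.45)

    for p in (0.25, 0.45, 0.55):
        print(f"p={p:.2f}: admissible={interval.admissible(p)}, "
              f"belief={dist.belief(p):.2f}")
    # The interval accepts all three values equally, whereas the belief
    # distribution discriminates: p=0.45 carries more belief than p=0.55.
```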