Human decision making is accompanied by a sense of confidence. According to Bayesian decision theory, confidence reflects the learned probability of making a correct response, given available data (e.g., accumulated stimulus evidence and response time). Although optimal, independently learning these probabilities for every possible combination of data is computationally intractable. Here, we describe a novel model of confidence that implements a low-dimensional approximation of this optimal yet intractable solution. The model allows efficient estimation of confidence while also accounting for idiosyncrasies, different kinds of biases, and deviations from the optimal probability correct. It dissociates confidence biases resulting from individuals' estimates of the reliability of evidence (captured by parameter alpha) from confidence biases resulting from general, stimulus-independent under- and overconfidence (captured by parameter beta). We provide empirical evidence that this model accurately fits both choice data (accuracy, response time) and trial-by-trial confidence ratings simultaneously. Finally, we test and empirically validate two novel predictions of the model: 1) changes in confidence can be independent of performance, and 2) selectively manipulating each model parameter leads to a distinct pattern of confidence judgments. As a tractable and flexible account of the computation of confidence, our model offers a clear framework for interpreting and further resolving different forms of confidence biases.

Mathematical and computational work has shown that, in order to optimize decision making, humans and other adaptive agents must compute confidence in their perceptions and actions. How this confidence is computed, however, remains unknown. We demonstrate how humans can approximate confidence in a tractable manner.
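The dissociation between the two bias parameters can be illustrated with a minimal sketch. The functional form, variable names, and parameter roles below are illustrative assumptions, not the paper's actual equations: confidence is read out as an approximate probability of being correct, with alpha scaling the assumed reliability of the accumulated evidence and beta adding a stimulus-independent shift.

```python
import math

def confidence(evidence, response_time, alpha=1.0, beta=0.0):
    """Hypothetical low-dimensional confidence readout (illustrative only).

    alpha : scales the perceived reliability of accumulated evidence;
            alpha > 1 inflates, alpha < 1 deflates evidence-driven confidence.
    beta  : stimulus-independent shift; beta > 0 produces general
            overconfidence, beta < 0 general underconfidence.
    """
    # Evidence accumulated over more time is weighted down, mimicking the
    # lower accuracy typically associated with slower responses.
    signal = alpha * abs(evidence) / response_time + beta
    # Map the signal onto a 0-1 confidence scale.
    return 1.0 / (1.0 + math.exp(-signal))

# Manipulating alpha changes how strongly confidence tracks the evidence,
# whereas manipulating beta shifts confidence regardless of the stimulus.
baseline = confidence(1.0, 0.8)
inflated_reliability = confidence(1.0, 0.8, alpha=2.0)
overconfident = confidence(1.0, 0.8, beta=1.0)
```

Under this toy parameterization, an alpha manipulation alters confidence only insofar as evidence is present, while a beta manipulation shifts confidence even for uninformative stimuli, which is the kind of distinct signature the model's predictions rely on.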
Our computational model makes novel predictions about when confidence will be biased (e.g., over- or underconfidence due to selective environmental feedback). We empirically tested these predictions in a novel experimental paradigm by providing continuous model-based feedback. Different feedback manipulations elicited distinct patterns of confidence judgments, as predicted by the model. Overall, we offer a framework both to interpret optimal confidence and to resolve the confidence biases that characterize several psychiatric disorders.