This article investigates how accurately experts (underwriters) and lay persons (university students) judge the risks posed by life-threatening events. Only one prior study (Slovic, Fischhoff, & Lichtenstein, 1985) has investigated the veracity of expert versus lay judgments of the magnitude of risk. In that study, a heterogeneous group of 15 experts was found to judge, using marginal estimations, a variety of risks as closer to the true annual frequencies of death than did convenience samples of the lay population. In this study, we use a larger, homogeneous sample of experts performing an ecologically valid task. We also ask our respondents to assess frequencies and relative frequencies directly, rather than ask for a "risk" estimate (a response mode subject to possible qualitative attributions), as was done in the Slovic et al. study. Although we find that the experts outperformed lay persons on a number of measures, the differences are small, and both groups showed similar global biases in terms of: (1) overestimating the likelihood of dying from a condition (marginal probability) and of dying from a condition given that it happens to you (conditional probability), and (2) underestimating the ratios of marginal and conditional likelihoods between pairs of potentially lethal events. In spite of these scaling problems, both groups showed quite good performance in ordering the lethal events in terms of marginal and conditional likelihoods. We discuss the nature of expertise using a framework developed by Bolger and Wright (1994), and consider whether the commonsense assumption of the superiority of expert risk assessors in making magnitude judgments of risk is, in fact, sensible.