The Akaike (1973, 1974) information criterion, AIC, and the corrected Akaike information criterion (Hurvich and Tsai, 1989), AICc, were both designed as estimators of the expected Kullback-Leibler discrepancy between the model generating the data and a fitted candidate model. AIC is justified in a very general framework, and as a result, offers a crude estimator of the expected discrepancy: one which exhibits a potentially high degree of negative bias in small-sample applications (Hurvich and Tsai, 1989). AICc corrects for this bias, but is less broadly applicable than AIC since its justification depends upon the form of the candidate model (Hurvich and Tsai, 1989, 1993; Hurvich et al., 1990; Bedrick and Tsai, 1994). Although AIC and AICc share the same objective, the derivations of the criteria proceed along very different lines, making it difficult to reconcile how AICc improves upon the approximations leading to AIC. To address this issue, we present a derivation which unifies the justifications of AIC and AICc in the linear regression framework.
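
For concreteness, it may help to recall the standard forms of the two criteria; these expressions do not appear in the passage above, and the notation ($n$ for the sample size, $k$ for the number of estimated parameters, $L(\hat{\theta})$ for the maximized likelihood) is introduced here purely for illustration:
\[
\mathrm{AIC} = -2\log L(\hat{\theta}) + 2k,
\qquad
\mathrm{AICc} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1} = -2\log L(\hat{\theta}) + \frac{2kn}{n-k-1}.
\]
The additional penalty term of AICc exceeds zero for any finite $n$ and vanishes as $n \to \infty$, so the two criteria agree asymptotically; the extra term is the small-sample adjustment that compensates for the negative bias of AIC noted above.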