Predictive risk models in the public sector are commonly developed using administrative data that is more complete for subpopulations that rely more heavily on public services. In the United States, for instance, information on health care utilization is routinely available to government agencies for individuals supported by Medicaid and Medicare, but not for the privately insured. Critiques of public sector algorithms have identified such "differential feature under-reporting" as a driver of disparities in algorithmic decision-making. Yet this form of data bias remains understudied from a technical viewpoint. While prior work has examined the fairness impacts of additive feature noise and of features that are clearly marked as missing, little is known about the setting in which data are missing without any indicator of missingness (i.e., differential feature under-reporting). In this work, we study an analytically tractable model of differential feature under-reporting to characterize its impact on algorithmic fairness. We demonstrate how standard missing data methods typically fail to mitigate bias in this setting, and propose a new set of augmented loss and imputation methods. Our results show that, in real-world data settings, under-reporting typically exacerbates disparities. The proposed solution methods show some success in mitigating disparities attributable to feature under-reporting.
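To make the setting concrete, the following minimal sketch (not taken from the paper; group labels, reporting rates, and variable names are illustrative assumptions) simulates the difference between marked missingness and differential feature under-reporting: in the latter case, unreported feature values silently collapse to a default value such as zero, and the observed data carry no indicator of which zeros are genuine.

```python
# Illustrative sketch (hypothetical data): a binary feature x_true is recorded
# at different rates for two groups. With marked missingness the unreported
# entries are flagged; with under-reporting they are silently stored as 0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

g = rng.binomial(1, 0.5, size=n)          # group membership (assumed 50/50 split)
x_true = rng.binomial(1, 0.3, size=n)     # true underlying feature

# Hypothetical reporting rates: group 1 relies more on public services,
# so its feature is recorded more completely than group 0's.
report_rate = np.where(g == 1, 0.9, 0.4)
reported = rng.binomial(1, report_rate).astype(bool)

# Marked missingness: unreported values are flagged (NaN) and can be
# handled with standard missing-data methods.
x_marked = np.where(reported, x_true.astype(float), np.nan)

# Differential under-reporting: unreported values silently become 0, so an
# observed 0 conflates "truly 0" with "never recorded".
x_underreported = np.where(reported, x_true, 0)

for grp in (0, 1):
    mask = g == grp
    print(f"group {grp}: true mean = {x_true[mask].mean():.2f}, "
          f"observed mean under under-reporting = {x_underreported[mask].mean():.2f}")
```

Running the sketch shows the observed feature mean deflated far more for the group with the lower reporting rate, which is the mechanism by which under-reporting can distort downstream risk scores differently across groups.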