Algorithmic Fairness of Machine Learning Models for Alzheimer Disease Progression

Cited by: 3
Authors
Yuan, Chenxi [1 ,2 ]
Linn, Kristin A. [1 ,2 ]
Hubbard, Rebecca A. [1 ]
Affiliations
[1] Univ Penn, Perelman Sch Med, Dept Biostat Epidemiol & Informat, 423 Guardian Dr, Philadelphia, PA 19146 USA
[2] Univ Penn, Perelman Sch Med, Dept Biostat Epidemiol & Informat, Penn Stat Imaging & Visualizat Endeavor, Philadelphia, PA 19104 USA
Funding
US National Institutes of Health; Canadian Institutes of Health Research;
Keywords
MILD COGNITIVE IMPAIRMENT; CLASSIFICATION; PREDICTION; HEALTH; MRI;
DOI
10.1001/jamanetworkopen.2023.42203
Chinese Library Classification
R5 [Internal Medicine];
Discipline Classification Code
1002; 100201;
Abstract
IMPORTANCE Predictive models using machine learning techniques have potential to improve early detection and management of Alzheimer disease (AD). However, these models potentially have biases and may perpetuate or exacerbate existing disparities.

OBJECTIVE To characterize the algorithmic fairness of longitudinal prediction models for AD progression.

DESIGN, SETTING, AND PARTICIPANTS This prognostic study investigated the algorithmic fairness of logistic regression, support vector machines, and recurrent neural networks for predicting progression to mild cognitive impairment (MCI) and AD using data from participants in the Alzheimer Disease Neuroimaging Initiative evaluated at 57 sites in the US and Canada. Participants aged 54 to 91 years who contributed data on at least 2 visits between September 2005 and May 2017 were included. Data were analyzed in October 2022.

EXPOSURES Fairness was quantified across sex, ethnicity, and race groups. Neuropsychological test scores, anatomical features from T1 magnetic resonance imaging, measures extracted from positron emission tomography, and cerebrospinal fluid biomarkers were included as predictors.

MAIN OUTCOMES AND MEASURES Outcome measures quantified fairness of prediction models (logistic regression [LR], support vector machine [SVM], and recurrent neural network [RNN] models), including equal opportunity, equalized odds, and demographic parity. Specifically, if the model exhibited equal sensitivity for all groups, it aligned with the principle of equal opportunity, indicating fairness in predictive performance.

RESULTS A total of 1730 participants in the cohort (mean [SD] age, 73.81 [6.92] years; 776 females [44.9%]; 69 Hispanic [4.0%] and 1661 non-Hispanic [96.0%]; 29 Asian [1.7%], 77 Black [4.5%], 1599 White [92.4%], and 25 other race [1.4%]) were included.
Sensitivity for predicting progression to MCI and AD was lower for Hispanic participants compared with non-Hispanic participants; the difference (SD) in true positive rate ranged from 20.9% (5.5%) for the RNN model to 27.8% (9.8%) for the SVM model in MCI and 24.1% (5.4%) for the RNN model to 48.2% (17.3%) for the LR model in AD. Sensitivity was similarly lower for Black and Asian participants compared with non-Hispanic White participants; for example, the difference (SD) in AD true positive rate was 14.5% (51.6%) in the LR model, 12.3% (35.1%) in the SVM model, and 28.4% (16.8%) in the RNN model for Black vs White participants, and the difference (SD) in MCI true positive rate was 25.6% (13.1%) in the LR model, 24.3% (13.1%) in the SVM model, and 6.8% (18.7%) in the RNN model for Asian vs White participants. Models generally satisfied metrics of fairness with respect to sex, with no significant differences by group, except for cognitively normal (CN)-MCI and MCI-AD transitions (eg, an absolute increase [SD] in the true positive rate of CN-MCI transitions of 10.3% [27.8%] for the LR model).

CONCLUSIONS AND RELEVANCE In this study, models were accurate in aggregate but failed to satisfy fairness metrics. These findings suggest that fairness should be considered in the development and use of machine learning models for AD progression.
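The fairness notions named in the abstract reduce to simple group-wise rates: equal opportunity compares sensitivity (true positive rate) across groups, and demographic parity compares the overall positive-prediction rate. The sketch below (not from the paper; all names and data are hypothetical illustrations) shows how these gaps could be computed for a binary progression classifier.

```python
# Hypothetical sketch of group fairness metrics for a binary classifier.
# Equal opportunity gap = max - min true positive rate across groups;
# demographic parity gap = max - min positive-prediction rate across groups.
import numpy as np

def true_positive_rate(y_true, y_pred):
    """Sensitivity: P(pred = 1 | true = 1) within the given arrays."""
    positives = y_true == 1
    return y_pred[positives].mean() if positives.any() else float("nan")

def fairness_gaps(y_true, y_pred, group):
    """Per-group TPR and positive-prediction rate, plus the between-group gaps."""
    groups = np.unique(group)
    tpr = {g: true_positive_rate(y_true[group == g], y_pred[group == g])
           for g in groups}
    ppr = {g: y_pred[group == g].mean() for g in groups}
    return {
        "tpr_by_group": tpr,
        "equal_opportunity_gap": max(tpr.values()) - min(tpr.values()),
        "demographic_parity_gap": max(ppr.values()) - min(ppr.values()),
    }

# Toy data: group "a" gets TPR 0.75, group "b" gets TPR 0.25,
# so the equal opportunity gap is 0.5.
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0])
group = np.array(["a"] * 6 + ["b"] * 6)
print(fairness_gaps(y_true, y_pred, group))
```

A model satisfies equal opportunity when the TPR gap is (approximately) zero; the paper's reported differences in true positive rate between, for example, Hispanic and non-Hispanic participants are exactly this kind of gap.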
Pages: 14