Algorithmic Fairness of Machine Learning Models for Alzheimer Disease Progression

Times Cited: 3
Authors
Yuan, Chenxi [1 ,2 ]
Linn, Kristin A. [1 ,2 ]
Hubbard, Rebecca A. [1 ]
Affiliations
[1] Univ Penn, Perelman Sch Med, Dept Biostat Epidemiol & Informat, 423 Guardian Dr, Philadelphia, PA 19146 USA
[2] Univ Penn, Perelman Sch Med, Dept Biostat Epidemiol & Informat, Penn Stat Imaging & Visualizat Endeavor, Philadelphia, PA 19104 USA
Funding
Canadian Institutes of Health Research; US National Institutes of Health;
Keywords
MILD COGNITIVE IMPAIRMENT; CLASSIFICATION; PREDICTION; HEALTH; MRI;
DOI
10.1001/jamanetworkopen.2023.42203
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Discipline Code
1002; 100201;
Abstract
IMPORTANCE Predictive models using machine learning techniques have the potential to improve early detection and management of Alzheimer disease (AD). However, these models may carry biases and may perpetuate or exacerbate existing disparities.
OBJECTIVE To characterize the algorithmic fairness of longitudinal prediction models for AD progression.
DESIGN, SETTING, AND PARTICIPANTS This prognostic study investigated the algorithmic fairness of logistic regression, support vector machines, and recurrent neural networks for predicting progression to mild cognitive impairment (MCI) and AD using data from participants in the Alzheimer Disease Neuroimaging Initiative evaluated at 57 sites in the US and Canada. Participants aged 54 to 91 years who contributed data on at least 2 visits between September 2005 and May 2017 were included. Data were analyzed in October 2022.
EXPOSURES Fairness was quantified across sex, ethnicity, and race groups. Neuropsychological test scores, anatomical features from T1 magnetic resonance imaging, measures extracted from positron emission tomography, and cerebrospinal fluid biomarkers were included as predictors.
MAIN OUTCOMES AND MEASURES Outcome measures quantified the fairness of the prediction models (logistic regression [LR], support vector machine [SVM], and recurrent neural network [RNN] models), including equal opportunity, equalized odds, and demographic parity. Specifically, if a model exhibited equal sensitivity for all groups, it aligned with the principle of equal opportunity, indicating fairness in predictive performance.
RESULTS A total of 1730 participants in the cohort (mean [SD] age, 73.81 [6.92] years; 776 females [44.9%]; 69 Hispanic [4.0%] and 1661 non-Hispanic [96.0%]; 29 Asian [1.7%], 77 Black [4.5%], 1599 White [92.4%], and 25 other race [1.4%]) were included. Sensitivity for predicting progression to MCI and AD was lower for Hispanic participants compared with non-Hispanic participants; the difference (SD) in true positive rate ranged from 20.9% (5.5%) for the RNN model to 27.8% (9.8%) for the SVM model in MCI, and from 24.1% (5.4%) for the RNN model to 48.2% (17.3%) for the LR model in AD. Sensitivity was similarly lower for Black and Asian participants compared with non-Hispanic White participants; for example, the difference (SD) in AD true positive rate was 14.5% (51.6%) in the LR model, 12.3% (35.1%) in the SVM model, and 28.4% (16.8%) in the RNN model for Black vs White participants, and the difference (SD) in MCI true positive rate was 25.6% (13.1%) in the LR model, 24.3% (13.1%) in the SVM model, and 6.8% (18.7%) in the RNN model for Asian vs White participants. Models generally satisfied metrics of fairness with respect to sex, with no significant differences by group, except for cognitively normal (CN)-MCI and MCI-AD transitions (eg, an absolute increase [SD] in the true positive rate of CN-MCI transitions of 10.3% [27.8%] for the LR model).
CONCLUSIONS AND RELEVANCE In this study, the models were accurate in aggregate but failed to satisfy fairness metrics. These findings suggest that fairness should be considered in the development and use of machine learning models for AD progression.
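The three fairness metrics named in the abstract can all be read off group-wise confusion-matrix rates: equal opportunity compares true positive rates (sensitivity) across groups, equalized odds additionally compares false positive rates, and demographic parity compares the overall rate of positive predictions. The sketch below illustrates that computation; it is not the authors' implementation, and the column names y_true, y_pred, and group, the toy data, and the helper names are hypothetical placeholders.

```python
# Minimal sketch of group-wise fairness gaps for a binary progression classifier.
# Assumes a pandas DataFrame with hypothetical columns:
#   y_true - observed progression (0/1)
#   y_pred - predicted progression (0/1)
#   group  - sensitive attribute (e.g., sex, ethnicity, or race)
import pandas as pd


def group_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group true positive rate, false positive rate, and positive prediction rate."""
    rows = {}
    for name, g in df.groupby("group"):
        rows[name] = {
            # sensitivity: share of true progressors predicted to progress
            "tpr": g.loc[g["y_true"] == 1, "y_pred"].mean(),
            # false positive rate: share of non-progressors predicted to progress
            "fpr": g.loc[g["y_true"] == 0, "y_pred"].mean(),
            # share of all participants in the group predicted to progress
            "ppr": g["y_pred"].mean(),
        }
    return pd.DataFrame.from_dict(rows, orient="index")


def fairness_gaps(df: pd.DataFrame, reference: str) -> pd.DataFrame:
    """Differences from a reference group:
    tpr gap -> equal opportunity; tpr and fpr gaps jointly -> equalized odds;
    ppr gap -> demographic parity.
    """
    rates = group_rates(df)
    return rates.subtract(rates.loc[reference], axis="columns")


if __name__ == "__main__":
    # Toy data with two groups, purely for illustration.
    toy = pd.DataFrame({
        "y_true": [1, 1, 0, 0, 1, 1, 0, 0],
        "y_pred": [1, 0, 0, 1, 1, 1, 0, 0],
        "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    print(fairness_gaps(toy, reference="B"))
```

Under this reading, a gap of zero against the reference group corresponds to satisfying the respective criterion; the true-positive-rate gaps (equal opportunity) are the quantities the study reports as differences in sensitivity between groups.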
Pages: 14
Related Papers
(50 records in total)
  • [41] Early-Stage Alzheimer's Disease Prediction Using Machine Learning Models
    Kavitha, C.
    Mani, Vinodhini
    Srividhya, S. R.
    Khalaf, Osamah Ibrahim
    Tavera Romero, Carlos Andres
    [J]. FRONTIERS IN PUBLIC HEALTH, 2022, 10
  • [42] Unsupervised Learning of Disease Progression Models
    Wang, Xiang
    Sontag, David
    Wang, Fei
    [J]. PROCEEDINGS OF THE 20TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'14), 2014, : 85 - 94
  • [43] Prediction Chronic Kidney Disease Progression In Diabetic patients using Machine Learning Models
    Apiromrak, Wasawat
    Toh, Chanavee
    Sangthawan, Pornpen
    Ingviya, Thammasin
    [J]. 2024 21ST INTERNATIONAL JOINT CONFERENCE ON COMPUTER SCIENCE AND SOFTWARE ENGINEERING, JCSSE 2024, 2024, : 566 - 573
  • [44] Machine learning models to predict disease progression among veterans with hepatitis C virus
    Konerman, Monica A.
    Beste, Lauren A.
    Van, Tony
    Liu, Boang
    Zhang, Xuefei
    Zhu, Ji
    Saini, Sameer D.
    Su, Grace L.
    Nallamothu, Brahmajee K.
    Ioannou, George N.
    Waljee, Akbar K.
    [J]. PLOS ONE, 2019, 14 (01):
  • [45] Detection of Alzheimer Disease Using Machine Learning
    Bhardwaj, Sumit
    Kaushik, Tarun
    Bisht, Manthan
    Gupta, Punit
    Mundra, Shikha
    [J]. SMART SYSTEMS: INNOVATIONS IN COMPUTING (SSIC 2021), 2022, 235 : 443 - 450
  • [46] Optimizing fairness tradeoffs in machine learning with multiobjective meta-models
    La Cava, William G.
    [J]. PROCEEDINGS OF THE 2023 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, GECCO 2023, 2023, : 511 - 519
  • [48] Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
    Awasthi, Pranjal
    Beutel, Alex
    Kleindessner, Matthäus
    Morgenstern, Jamie
    Wang, Xuezhi
    [J]. PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 206 - 214
  • [49] Wasserstein-based fairness interpretability framework for machine learning models
    Miroshnikov, Alexey
    Kotsiopoulos, Konstandinos
    Franks, Ryan
    Kannan, Arjun Ravi
    [J]. MACHINE LEARNING, 2022, 111 (09) : 3307 - 3357
  • [50] Achieving Outcome Fairness in Machine Learning Models for Social Decision Problems
    Fang, Boli
    Jiang, Miao
    Cheng, Pei-yi
    Shen, Jerry
    Fang, Yi
    [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 444 - 450