Analyzing the Impact of Personalization on Fairness in Federated Learning for Healthcare

Cited: 0
Authors
Wang, Tongnian [1 ]
Zhang, Kai [2 ]
Cai, Jiannan [3 ]
Gong, Yanmin [4 ]
Choo, Kim-Kwang Raymond [1 ]
Guo, Yuanxiong [1 ]
Affiliations
[1] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
[2] Univ Texas Hlth Sci Ctr Houston, McWilliams Sch Biomed Informat, Houston, TX 77030 USA
[3] Univ Texas San Antonio, Sch Civil & Environm Engn & Construct Management, San Antonio, TX 78249 USA
[4] Univ Texas San Antonio, Dept Elect & Comp Engn, San Antonio, TX 78249 USA
Funding
National Science Foundation (USA);
Keywords
Health disparities; Group fairness; Federated learning; Personalization; Privacy; EXTRACTION; DISEASE;
DOI
10.1007/s41666-024-00164-7
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
As machine learning (ML) becomes more widely used in the healthcare sector, there are growing concerns about potential biases and privacy risks. One countermeasure is to use federated learning (FL), which supports collaborative learning without sharing patient data across organizations. However, the inherent heterogeneity of data distributions among participating FL parties poses challenges for exploring group fairness in FL. While personalization within FL can mitigate the performance degradation caused by data heterogeneity, its influence on group fairness has not been fully investigated. Therefore, the primary focus of this study is to rigorously assess the impact of personalized FL on group fairness in the healthcare domain, offering a comprehensive understanding of how personalized FL affects group fairness in clinical outcomes. We conduct an empirical analysis using two prominent real-world Electronic Health Records (EHR) datasets, namely eICU and MIMIC-IV. Our methodology involves a thorough comparison between personalized FL and two baselines: standalone training, where models are developed independently without FL collaboration, and standard FL, which learns a global model via the FedAvg algorithm. We adopt Ditto as our personalized FL approach, which enables each client in FL to develop its own personalized model through multi-task learning. We evaluate these methods by comparing their predictive performance (i.e., AUROC and AUPRC) and fairness gaps (i.e., EOPP, EOD, and DP). Personalized FL demonstrates superior predictive accuracy and fairness over standalone training across both datasets. Nevertheless, in comparison with standard FL, personalized FL shows improved predictive accuracy but does not consistently offer better fairness outcomes. For instance, in the 24-h in-hospital mortality prediction task, personalized FL achieves an average EOD of 27.4% across racial groups on the eICU dataset and 47.8% on MIMIC-IV. In comparison, standard FL records a better EOD of 26.2% for eICU and 42.0% for MIMIC-IV, while standalone training yields significantly worse EODs of 69.4% and 54.7% on these datasets, respectively. Our analysis reveals that personalized FL has the potential to enhance fairness relative to standalone training, yet it does not consistently ensure fairness improvements compared to standard FL. Our findings also show that while personalization can improve fairness for more biased hospitals (i.e., hospitals with larger fairness gaps in standalone training), it can exacerbate fairness issues for less biased ones. These insights suggest that integrating personalized FL with additional strategic designs could be key to simultaneously boosting prediction accuracy and reducing fairness disparities. The findings and opportunities outlined in this paper can inform the research agenda for future studies to overcome these limitations and further advance health equity research.
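To make the reported fairness gaps concrete, the sketch below shows one common way to compute EOPP, EOD, and DP for a binary prediction task (e.g., 24-h in-hospital mortality) with a binary sensitive attribute. The abstract does not specify the exact definitions used in the paper (e.g., whether EOD takes the maximum or the average of the TPR and FPR gaps, or how gaps are aggregated over more than two racial groups), so the function names and formulas here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the group-fairness gaps named in
# the abstract, for a binary classifier and a binary sensitive attribute.
# Assumed definitions: EOPP = gap in true-positive rate, EOD = larger of the
# TPR and FPR gaps, DP = gap in positive-prediction rate.
import numpy as np


def _rates(y_true, y_pred):
    """Return (TPR, FPR, positive-prediction rate) for one group."""
    tpr = y_pred[y_true == 1].mean() if (y_true == 1).any() else 0.0
    fpr = y_pred[y_true == 0].mean() if (y_true == 0).any() else 0.0
    ppr = y_pred.mean()
    return tpr, fpr, ppr


def fairness_gaps(y_true, y_pred, group):
    """Compute EOPP, EOD, and DP gaps between two sensitive groups (0 and 1)."""
    tpr_a, fpr_a, ppr_a = _rates(y_true[group == 0], y_pred[group == 0])
    tpr_b, fpr_b, ppr_b = _rates(y_true[group == 1], y_pred[group == 1])
    return {
        "EOPP": abs(tpr_a - tpr_b),                          # equal opportunity
        "EOD": max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)),  # equalized odds
        "DP": abs(ppr_a - ppr_b),                            # demographic parity
    }


# Toy usage: binary mortality labels, thresholded predictions, group indicator.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)
print(fairness_gaps(y_true, y_pred, group))
```

In this kind of study the gaps would be computed per hospital on held-out predictions from standalone, FedAvg, and Ditto-personalized models and then averaged across racial groups; the two-group version above is kept deliberately simple.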
Pages: 181-205
Number of pages: 25
Related Papers (50 in total)
  • [31] DFFL: A dual fairness framework for federated learning
    Qi, Kaiyue
    Yan, Tongjiang
    Ren, Pengcheng
    Yang, Jianye
    Li, Jialin
    COMPUTER COMMUNICATIONS, 2025, 235
  • [32] Fairness in Federated Learning: Trends, Challenges, and Opportunities
    Mukhtiar, Noorain
    Mahmood, Adnan
    Sheng, Quan Z.
    ADVANCED INTELLIGENT SYSTEMS, 2025,
  • [33] The Current State and Challenges of Fairness in Federated Learning
    Vucinich, Sean
    Zhu, Qiang
    IEEE ACCESS, 2023, 11 : 80903 - 80914
  • [34] On the impact of non-IID data on the performance and fairness of differentially private federated learning
    Amiri, Saba
    Belloum, Adam
    Nalisnick, Eric
    Klous, Sander
    Gommans, Leon
    52ND ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOP VOLUME (DSN-W 2022), 2022, : 52 - 58
  • [35] Adapt to Adaptation: Learning Personalization for Cross-Silo Federated Learning
    Luo, Jun
    Wu, Shandong
    IJCAI INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2022: 2166 - 2173
  • [36] On the Impact of Data Heterogeneity in Federated Learning Environments with Application to Healthcare Networks
    Milasheuski, U.
    Barbieri, L.
    Tedeschini, B. Camajori
    Nicoli, M.
    Savazzi, M. S.
    2024 IEEE CONFERENCE ON ARTIFICIAL INTELLIGENCE, CAI 2024, 2024, : 1017 - 1023
  • [37] On the Impact of Model Compression for Bayesian Federated Learning: An Analysis on Healthcare Data
    Barbieri, Luca
    Savazzi, Stefano
    Nicoli, Monica
    IEEE SIGNAL PROCESSING LETTERS, 2025, 32 : 251 - 255
  • [38] EPFFL: Enhancing Privacy and Fairness in Federated Learning for Distributed E-Healthcare Data Sharing Services
    Liu, Jingwei
    Li, Yating
    Zhao, Mengjiao
    Liu, Lei
    Kumar, Neeraj
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2025, 22 (02) : 1239 - 1252
  • [39] AI Fairness-From Machine Learning to Federated Learning
    Patnaik, Lalit Mohan
    Wang, Wenfeng
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2024, 139 (02): : 1203 - 1215
  • [40] Progressive search personalization and privacy protection using federated learning
    Sarkar, Sagnik
    Agrawal, Shaashwat
    Chowdhuri, Aditi
    Ramani, S.
    EXPERT SYSTEMS, 2025, 42 (01)