The Impact of Data Distribution on Fairness and Robustness in Federated Learning

Cited by: 2
Authors
Ozdayi, Mustafa Safa [1 ]
Kantarcioglu, Murat [1 ]
Affiliation
[1] Univ Texas Dallas, Dept Comp Sci, Richardson, TX 75083 USA
Keywords
Federated Learning; Algorithmic Fairness; Adversarial Machine Learning
DOI
10.1109/TPSISA52974.2021.00022
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) is a distributed machine learning protocol that allows a set of agents to collaboratively train a model without sharing their datasets. This makes FL particularly suitable for settings where data privacy is desired. However, it has been observed that the performance of FL is closely related to the similarity of the agents' local data distributions. In particular, as the data distributions of agents diverge, the accuracy of the trained models drops. In this work, we look at how variations in local data distributions affect the fairness and robustness properties of the trained models in addition to their accuracy. Our experimental results indicate that the trained models exhibit higher bias and become more susceptible to attacks as local data distributions diverge. Importantly, the degradation in fairness and robustness can be much more severe than the degradation in accuracy. We thereby reveal that small variations that have little impact on accuracy can still matter if the trained model is to be deployed in a fairness- or security-critical context.
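The abstract is not accompanied by code; the following is a minimal, self-contained sketch (not the authors' implementation) of the setting it describes: FedAvg-style training across agents whose local label distributions are skewed via a Dirichlet split, a common way to simulate non-IID local data in FL experiments. The model, dataset, agent count, and the alpha skew parameter are illustrative assumptions, not details from the paper.

```python
# Illustrative FedAvg sketch on synthetic data (assumptions throughout):
# smaller `alpha` in the Dirichlet split means more dissimilar local
# label distributions, the regime the paper studies.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data.
n, d = 2000, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def dirichlet_split(y, n_agents=5, alpha=0.5):
    """Partition sample indices per label with Dirichlet proportions."""
    idx_per_agent = [[] for _ in range(n_agents)]
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_agents))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for agent, chunk in enumerate(np.split(idx, cuts)):
            idx_per_agent[agent].extend(chunk)
    return [np.array(a) for a in idx_per_agent]

def local_sgd(w, Xa, ya, lr=0.1, epochs=5):
    """A few epochs of full-batch gradient descent on one agent's data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Xa @ w)))        # sigmoid predictions
        w = w - lr * Xa.T @ (p - ya) / len(ya)     # logistic-loss gradient
    return w

agents = dirichlet_split(y, n_agents=5, alpha=0.5)
w = np.zeros(d)
for _ in range(20):                                # communication rounds
    nonempty = [a for a in agents if len(a) > 0]
    updates = [local_sgd(w.copy(), X[a], y[a]) for a in nonempty]
    sizes = np.array([len(a) for a in nonempty], dtype=float)
    w = np.average(updates, axis=0, weights=sizes) # FedAvg aggregation

acc = (((X @ w) > 0).astype(float) == y).mean()
print(f"global accuracy: {acc:.3f}")
```

Rerunning the sketch with a smaller alpha (e.g. 0.1) makes the local splits more skewed; the paper's point is that, beyond the resulting accuracy drop, fairness and robustness degrade even more sharply.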
Pages: 191-196
Page count: 6