The Impact of Data Distribution on Fairness and Robustness in Federated Learning

Cited by: 2
|
Authors:
Ozdayi, Mustafa Safa [1]
Kantarcioglu, Murat [1]
Affiliation:
[1] Univ Texas Dallas, Dept Comp Sci, Richardson, TX 75083 USA
Keywords:
Federated Learning; Algorithmic Fairness; Adversarial Machine Learning
DOI:
10.1109/TPSISA52974.2021.00022
CLC Classification:
TP18 [Artificial Intelligence Theory]
Subject Classification Codes:
081104; 0812; 0835; 1405
Abstract:
Federated Learning (FL) is a distributed machine learning protocol that allows a set of agents to collaboratively train a model without sharing their datasets. This makes FL particularly suitable for settings where data privacy is desired. However, it has been observed that the performance of FL is closely related to the similarity of the agents' local data distributions. In particular, as the data distributions of agents diverge, the accuracy of the trained models drops. In this work, we examine how variations in local data distributions affect the fairness and robustness properties of the trained models, in addition to their accuracy. Our experimental results indicate that the trained models exhibit higher bias and become more susceptible to attacks as local data distributions diverge. Importantly, the degradation in fairness and robustness can be much more severe than that in accuracy. We therefore show that small distributional variations with little impact on accuracy can still matter if the trained model is to be deployed in a fairness- or security-critical context.
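The abstract's key experimental variable is how far the agents' local data distributions diverge. As a rough illustration only (not the authors' exact setup), a common way to simulate such non-IID splits is Dirichlet partitioning of class labels, with a smaller concentration parameter `alpha` producing more skewed local datasets; FedAvg-style aggregation then combines local models by a size-weighted average. The names `dirichlet_partition`, `fedavg`, and `alpha` below are illustrative, not from the paper:

```python
import numpy as np

def dirichlet_partition(labels, n_agents, alpha, seed=0):
    """Assign sample indices to agents class by class.

    For each class, the fraction going to each agent is drawn from a
    Dirichlet(alpha) distribution; smaller alpha -> more skewed (non-IID) splits.
    """
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    agent_indices = [[] for _ in range(n_agents)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        props = rng.dirichlet(alpha * np.ones(n_agents))  # fractions per agent
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for a, chunk in enumerate(np.split(idx, cuts)):
            agent_indices[a].extend(chunk.tolist())
    return agent_indices

def fedavg(local_weights, local_sizes):
    """Server-side FedAvg: average local weights, weighted by dataset size."""
    total = sum(local_sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, local_sizes))

# 3 classes, 100 samples each; alpha=0.1 yields highly skewed local datasets.
labels = np.repeat(np.arange(3), 100)
parts = dirichlet_partition(labels, n_agents=5, alpha=0.1)
```

With `alpha` near zero each agent sees mostly one class, approximating the high-divergence regime where the paper observes the largest drops in fairness and robustness.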
Pages: 191 - 196
Page count: 6
Related Papers
50 items in total
  • [21] Trustworthy Machine Learning: Fairness and Robustness
    Liu, Haochen
    WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, 2022, : 1553 - 1554
  • [22] Visually Analysing the Fairness of Clustered Federated Learning with Non-IID Data
    Huang, Li
    Cui, Weiwei
    Zhu, Bin
    Zhang, Haidong
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [23] RoFL: Robustness of Secure Federated Learning
    Lycklama, Hidde
    Burkhalter, Lukas
    Viand, Alexander
    Kuchler, Nicolas
    Hithnawi, Anwar
    2023 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP, 2023, : 453 - 476
  • [24] FedAVE: Adaptive data value evaluation framework for collaborative fairness in federated learning
    Wang, Zihui
    Peng, Zhaopeng
    Fan, Xiaoliang
    Wang, Zheng
    Wu, Shangbin
    Yu, Rongshan
    Yang, Peizhen
    Zheng, Chuanpan
    Wang, Cheng
    NEUROCOMPUTING, 2024, 574
  • [25] Delving into the Adversarial Robustness of Federated Learning
    Zhang, Jie
    Li, Bo
    Chen, Chen
    Lyu, Lingjuan
    Wu, Shuang
    Ding, Shouhong
    Wu, Chao
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 9, 2023, : 11245 - 11253
  • [26] Auditable Federated Learning With Byzantine Robustness
    Liang, Yihuai
    Li, Yan
    Shin, Byeong-Seok
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11 (06): : 8191 - 8203
  • [27] ON THE BYZANTINE ROBUSTNESS OF CLUSTERED FEDERATED LEARNING
    Sattler, Felix
    Mueller, Klaus-Robert
    Wiegand, Thomas
    Samek, Wojciech
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 8861 - 8865
  • [28] A Federated Framework for Edge Computing Devices with Collaborative Fairness and Adversarial Robustness
    Yang, Hailin
    Huang, Yanhong
    Shi, Jianqi
    Yang, Yang
    JOURNAL OF GRID COMPUTING, 2023, 21 (03)
  • [30] Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning
    Nanda, Vedant
    Dooley, Samuel
    Singla, Sahil
    Feizi, Soheil
    Dickerson, John P.
    PROCEEDINGS OF THE 2021 ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, FACCT 2021, 2021, : 466 - 477