FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning

Cited by: 8
Authors
Chen, Ling-Yuan [1]
Chiu, Te-Chuan [2]
Pang, Ai-Chun [1,2,3]
Cheng, Li-Chen [1]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Source
2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021
Keywords
Edge AI; Federated Learning; Model Poisoning Attacks; Model Security; System Robustness;
DOI
10.1109/GLOBECOM46510.2021.9685082
Chinese Library Classification
TP [automation technology; computer technology]
Discipline code
0812
Abstract
With the advent of edge AI, federated learning (FL) is a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to a recent security threat: model poisoning attacks. By replacing the global model with a targeted poisoned model, malicious end devices can plant backdoors and manipulate the whole learning process. Prior work assuming a homogeneous environment can safely exclude outliers with little effect on model performance. In privacy-preserving FL, however, each end device may hold only a few data classes and differing amounts of data, forming a highly heterogeneous environment in which an outlier could be either malicious or benign. To preserve both the performance and the robustness of the FL framework, no local model should be summarily removed from the global model update. In this paper, we therefore propose a defense strategy called FedEqual, which mitigates model poisoning attacks while preserving the learning task's performance and without excluding any benign model. Experiments with reproduced state-of-the-art model poisoning attacks show that FedEqual outperforms other state-of-the-art baselines across different heterogeneous environments.
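The abstract does not spell out FedEqual's aggregation rule, so the sketch below is only illustrative. It contrasts plain FedAvg, which a boosted poisoned update can dominate, with a hypothetical "equal-contribution" aggregator that rescales every client update to the median update norm before averaging: every model stays in the update, as the paper advocates, yet no single update can drag the global model far. The function names and the median-norm heuristic are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def fedavg(global_w, updates, weights=None):
    """Plain (weighted) FedAvg: average the client updates into the global model."""
    updates = np.stack([np.asarray(u, dtype=float) for u in updates])
    if weights is None:
        weights = np.full(len(updates), 1.0 / len(updates))
    return global_w + np.average(updates, axis=0, weights=weights)

def equalized_aggregate(global_w, updates):
    """Illustrative equal-contribution aggregation (assumed, not FedEqual's
    exact rule): rescale each client update to the median update norm, then
    average. No client is excluded, but a boosted poisoned update loses its
    outsized magnitude."""
    updates = [np.asarray(u, dtype=float) for u in updates]
    norms = np.array([np.linalg.norm(u) for u in updates])
    target = np.median(norms)                      # common contribution scale
    scaled = [u * (target / max(n, 1e-12))         # equalize every update's norm
              for u, n in zip(updates, norms)]
    return global_w + np.mean(scaled, axis=0)

# A model-replacement-style attacker submits a heavily boosted update.
g = np.zeros(4)
benign = [np.full(4, 0.1), np.full(4, 0.12), np.full(4, 0.09)]
malicious = np.full(4, 10.0)
naive = fedavg(g, benign + [malicious])            # dominated by the attacker
robust = equalized_aggregate(g, benign + [malicious])  # stays near benign scale
```

Here `naive` ends up roughly 25x the benign update magnitude, while `robust` stays at the benign scale because the malicious update was rescaled, not removed.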
Pages: 6