FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning

Cited by: 8
Authors
Chen, Ling-Yuan [1 ]
Chiu, Te-Chuan [2 ]
Pang, Ai-Chun [1 ,2 ,3 ]
Cheng, Li-Chen [1 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Keywords
Edge AI; Federated Learning; Model Poisoning Attacks; Model Security; System Robustness;
DOI
10.1109/GLOBECOM46510.2021.9685082
Chinese Library Classification
TP [Automation and Computer Technology];
Subject Classification Code
0812
Abstract
With the rise of edge AI, federated learning (FL) offers a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to an emerging security threat: model poisoning attacks. By replacing the global model with a targeted poisoned model, malicious end devices can implant backdoors and manipulate the entire learning process. Traditional defenses, designed for homogeneous environments, can exclude outliers with few side effects on model performance. In privacy-preserving FL, however, each end device may hold only a few data classes and differing amounts of data, forming a highly heterogeneous environment in which outliers may be either malicious or benign. To retain both the performance and the robustness of the FL framework, no local model should be rashly removed from the global model update. We therefore propose a defense strategy, FedEqual, that mitigates model poisoning attacks while preserving learning-task performance, without excluding any benign model. Results show that FedEqual outperforms state-of-the-art baselines in different heterogeneous environments against reproduced, up-to-date model poisoning attacks.
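The abstract describes a model-replacement attack on federated averaging and a defense that keeps every local model in the aggregation. The sketch below illustrates both ideas in Python with NumPy. Note that the abstract does not specify FedEqual's actual algorithm: equalized_aggregate is a hypothetical norm-equalizing rule written here purely to illustrate the equal-contribution idea, and the function names and toy setup are assumptions, not the paper's method.

    # Minimal sketch: model-replacement poisoning of FedAvg, and a hypothetical
    # equal-contribution aggregation in the spirit of (but NOT identical to) FedEqual.
    import numpy as np

    def fedavg(global_w, client_ws):
        # Plain FedAvg with equal client weights (equal data sizes assumed).
        return np.mean(client_ws, axis=0)

    def replacement_attack(global_w, poisoned_w, n_clients):
        # Model replacement: scale the malicious model so that, after averaging
        # over n_clients, the global model lands on poisoned_w (assumes the
        # benign updates are small).
        return n_clients * (poisoned_w - global_w) + global_w

    def equalized_aggregate(global_w, client_ws):
        # Hypothetical equal-contribution rule (an assumption, not the paper's
        # algorithm): rescale every client update to the median update norm so
        # no single update can dominate, yet no benign outlier is excluded.
        deltas = [w - global_w for w in client_ws]
        norms = [np.linalg.norm(d) for d in deltas]
        target = np.median(norms)
        scaled = [d * (target / max(n, 1e-12)) for d, n in zip(deltas, norms)]
        return global_w + np.mean(scaled, axis=0)

    # Toy round: 9 benign clients near the global model, 1 attacker.
    rng = np.random.default_rng(0)
    global_w = np.zeros(4)
    benign = [global_w + 0.1 * rng.standard_normal(4) for _ in range(9)]
    attacker = replacement_attack(global_w, np.full(4, 5.0), n_clients=10)

    print(fedavg(global_w, benign + [attacker]))               # pulled to ~5.0 (poisoned)
    print(equalized_aggregate(global_w, benign + [attacker]))  # stays near 0 (benign)

Rescaling rather than rejecting updates is one plausible reading of "without excluding any benign model"; the actual FedEqual mechanism should be taken from the paper itself.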
Pages: 6