FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning

Citations: 8
Authors
Chen, Ling-Yuan [1 ]
Chiu, Te-Chuan [2 ]
Pang, Ai-Chun [1 ,2 ,3 ]
Cheng, Li-Chen [1 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Source
2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021
Keywords
Edge AI; Federated Learning; Model Poisoning Attacks; Model Security; System Robustness;
DOI
10.1109/GLOBECOM46510.2021.9685082
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
With the rise of edge AI, federated learning (FL) is a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to a recent security threat: model poisoning attacks. By successfully replacing the global model with a targeted poisoned model, malicious end devices can trigger backdoor attacks and manipulate the whole learning process. Prior work assuming a homogeneous environment can safely exclude outliers with little side effect on model performance. In privacy-preserving FL, however, each end device may own only a few data classes and a different amount of data, forming a highly heterogeneous environment in which outliers could be either malicious or benign. To preserve both the performance and the robustness of the FL framework, no local model should be assertively removed from the global model update procedure. Therefore, in this paper, we propose a defense strategy called FedEqual that mitigates model poisoning attacks while preserving the learning task's performance without excluding any benign models. The results show that FedEqual outperforms state-of-the-art baselines under different heterogeneous environments, based on reproduced up-to-date model poisoning attacks.
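The model-replacement attack described in the abstract can be sketched numerically. The snippet below is a toy illustration only (it shows the attack against vanilla FedAvg, not FedEqual's defense, which the paper itself defines): a single malicious client, knowing its aggregation share, scales its submitted model so that the weighted average lands exactly on the attacker's target model. All variable names are illustrative.

```python
import numpy as np

def fedavg(client_models, weights):
    """Vanilla FedAvg: weighted average of client model vectors."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, client_models))

rng = np.random.default_rng(0)
global_model = np.zeros(3)

# three benign clients send small honest updates around the global model
benign = [global_model + 0.01 * rng.standard_normal(3) for _ in range(3)]

# the attacker's poisoned (e.g. backdoored) target model
target = np.array([5.0, -5.0, 5.0])

# model replacement: with aggregation share s, the attacker solves
#   s * malicious + s * sum(benign_i) = target   for its own submission
share = 0.25  # four clients with equal data sizes
malicious = (target - share * np.sum(benign, axis=0)) / share

new_global = fedavg(benign + [malicious], [1, 1, 1, 1])
# new_global now coincides with the attacker's target model
```

This is why naive outlier removal seems attractive: the malicious update is far from the benign ones. The abstract's point is that under heterogeneous (non-IID) data, benign clients can look like outliers too, so a defense must down-weight rather than exclude.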
Pages: 6
Related Papers (50 total)
  • [41] Sine: Similarity is Not Enough for Mitigating Local Model Poisoning Attacks in Federated Learning
    Kasyap, Harsh
    Tripathy, Somanath
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (05) : 4481 - 4494
  • [42] RECESS Vaccine for Federated Learning: Proactive Defense Against Model Poisoning Attacks
    Yan, Haonan
    Zhang, Wenjing
    Chen, Qian
    Li, Xiaoguang
    Sun, Wenhai
    Li, Hui
    Lin, Xiaodong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [43] Moat: Model Agnostic Defense against Targeted Poisoning Attacks in Federated Learning
    Manna, Arpan
    Kasyap, Harsh
    Tripathy, Somanath
    INFORMATION AND COMMUNICATIONS SECURITY (ICICS 2021), PT I, 2021, 12918 : 38 - 55
  • [44] On the Analysis of Model Poisoning Attacks against Blockchain-based Federated Learning
    Olapojoye, Rukayat
    Baza, Mohamed
    Salman, Tara
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 943 - 949
  • [45] ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Li, Yingjiu
    Deng, Robert H.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1639 - 1654
  • [46] Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks
    Tezuka, Naoya
    Ochiai, Hideya
    Sun, Yuwei
    Esaki, Hiroshi
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 168 - 177
  • [47] A Robust and Efficient Federated Learning Algorithm Against Adaptive Model Poisoning Attacks
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09): : 16289 - 16302
  • [48] A Differentially Private Federated Learning Model Against Poisoning Attacks in Edge Computing
    Zhou, Jun
    Wu, Nan
    Wang, Yisong
    Gu, Shouzhen
    Cao, Zhenfu
    Dong, Xiaolei
    Choo, Kim-Kwang Raymond
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 1941 - 1958
  • [49] A Federated Weighted Learning Algorithm Against Poisoning Attacks
    Ning, Yafei
    Zhang, Zirui
    Li, Hu
    Xia, Yuhan
    Li, Ming
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 18 (1)
  • [50] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501