FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning

Cited by: 8
Authors
Chen, Ling-Yuan [1 ]
Chiu, Te-Chuan [2 ]
Pang, Ai-Chun [1 ,2 ,3 ]
Cheng, Li-Chen [1 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Source
2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021
Keywords
Edge AI; Federated Learning; Model Poisoning Attacks; Model Security; System Robustness;
DOI
10.1109/GLOBECOM46510.2021.9685082
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
With the rise of edge AI, federated learning (FL) has emerged as a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to an emerging security threat: model poisoning attacks. By successfully replacing the global model with a targeted poisoned model, malicious end devices can trigger backdoor attacks and manipulate the whole learning process. Traditional research under homogeneous environments can safely exclude outliers with little side effect on model performance. In privacy-preserving FL, however, each end device may own only a few data classes and differing amounts of data, forming a highly heterogeneous environment in which outliers could be either malicious or benign. To preserve both the performance and the robustness of the FL framework, no local model should be hastily removed from the global model update procedure. Therefore, in this paper, we propose a defending strategy called FedEqual to mitigate model poisoning attacks while preserving the learning task's performance without excluding any benign models. The results show that FedEqual outperforms other state-of-the-art baselines under different heterogeneous environments against reproduced state-of-the-art model poisoning attacks.
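The abstract does not include the algorithm itself, but the core idea it describes, limiting any single client's influence on the global model without excluding anyone, can be illustrated with a minimal aggregation sketch. The Python below is an assumption-laden illustration, not the paper's exact FedEqual procedure: the function name `equalized_aggregate` and the median-norm rescaling rule are hypothetical choices, shown only to make concrete how equalizing update magnitudes defuses a model-replacement attack that relies on scaling up a poisoned update.

```python
import numpy as np

def equalized_aggregate(global_model: np.ndarray,
                        client_models: list[np.ndarray]) -> np.ndarray:
    """Average client updates after rescaling each to a common norm.

    Model-replacement attacks scale a poisoned update so it dominates
    the plain average; forcing every update to the same magnitude
    defuses that dominance without dropping any client (illustrative
    sketch only, not the published FedEqual algorithm).
    """
    updates = [m - global_model for m in client_models]
    norms = [np.linalg.norm(u) for u in updates]
    target = np.median(norms)  # common magnitude; median resists outliers
    equalized = [u * (target / max(n, 1e-12))  # avoid division by zero
                 for u, n in zip(updates, norms)]
    return global_model + np.mean(equalized, axis=0)

# Toy round: one attacker boosts its update 50x to hijack the average.
rng = np.random.default_rng(0)
g = np.zeros(10)
benign = [g + rng.normal(0, 0.1, 10) for _ in range(9)]
attacker = g + 50.0 * rng.normal(0, 0.1, 10)
new_g = equalized_aggregate(g, benign + [attacker])
print(np.linalg.norm(new_g))  # stays near the benign update scale
```

Because every update, benign or malicious, ends up with the same norm, the attacker's contribution is diluted to one vote among many, while benign clients with unusual but honest updates are still averaged in rather than discarded.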
Pages: 6
Related Papers
50 items in total
  • [21] MATFL: Defending Against Synergetic Attacks in Federated Learning
    Yang, Wen
    Peng, Luyao
    Tang, Xiangyun
    Weng, Yu
    2023 IEEE INTERNATIONAL CONFERENCES ON INTERNET OF THINGS (ITHINGS), IEEE GREEN COMPUTING AND COMMUNICATIONS (GREENCOM), IEEE CYBER, PHYSICAL AND SOCIAL COMPUTING (CPSCOM), IEEE SMART DATA (SMARTDATA) AND IEEE CONGRESS ON CYBERMATICS, 2024, : 313 - 319
  • [22] Defending Against Byzantine Attacks in Quantum Federated Learning
    Xia, Qi
    Tao, Zeyi
    Li, Qun
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 145 - 152
  • [24] Romoa: Robust Model Aggregation for the Resistance of Federated Learning to Model Poisoning Attacks
    Mao, Yunlong
    Yuan, Xinyu
    Zhao, Xinyang
    Zhong, Sheng
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 476 - 496
  • [25] Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
    Fang, Minghong
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020, : 1623 - 1640
  • [26] MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 3395 - 3403
  • [27] DeMAC: Towards detecting model poisoning attacks in federated learning system
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    INTERNET OF THINGS, 2023, 23
  • [28] Defense Strategies Toward Model Poisoning Attacks in Federated Learning: A Survey
    Wang, Zhilin
    Kang, Qiao
    Zhang, Xinyi
    Hu, Qin
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 548 - 553
  • [29] Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning
    Shejwalkar, Virat
    Houmansadr, Amir
    28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021,
  • [30] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375