MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

Cited by: 50
Authors: Cao, Xiaoyu [1]; Gong, Neil Zhenqiang [1]
Affiliation: [1] Duke Univ, Durham, NC 27706 USA
Funding: US National Science Foundation
DOI: 10.1109/CVPRW56347.2022.00383
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Code: 081202
Abstract
Existing model poisoning attacks on federated learning assume that an attacker controls a large fraction of compromised genuine clients. However, such an assumption is unrealistic in production federated learning systems that involve millions of clients. In this work, we propose the first Model Poisoning Attack based on Fake clients, called MPAF. Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learned global model has low accuracy on many indiscriminate test inputs. Towards this goal, our attack drags the global model towards an attacker-chosen base model that has low accuracy. In each round of federated learning, the fake clients craft fake local model updates that point towards the base model and scale them up to amplify their impact before sending them to the cloud server. Our experiments show that MPAF can significantly decrease the test accuracy of the global model even when classical defenses and norm clipping are adopted, highlighting the need for more advanced defenses.
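The crafting step described in the abstract reduces to a one-line computation per round. Below is a minimal NumPy sketch of that idea, assuming model parameters are flattened into a single vector; the function name, variable names, and the concrete scale value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def craft_mpaf_update(w_global, w_base, scale=1e6):
    """Craft a fake local model update in the spirit of MPAF: the update
    points from the current global model toward an attacker-chosen
    low-accuracy base model, and is scaled up to amplify its impact.
    (Names and the scale value here are illustrative, not from the paper.)
    """
    return scale * (w_base - w_global)

# Toy usage with a 10-parameter "model".
rng = np.random.default_rng(0)
w_global = rng.normal(size=10)  # current global model broadcast by the server
w_base = rng.normal(size=10)    # attacker-chosen base model with low accuracy
fake_update = craft_mpaf_update(w_global, w_base)

# Each fake client reports `fake_update`; after the server averages it with
# benign updates, the aggregate is dragged toward w_base each round.
```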
Pages: 3395-3403 (9 pages)
Related Papers (10 of 50 shown)
  • [1] FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients
    Zhang, Zaixi
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 2545 - 2555
  • [2] On the Analysis of Model Poisoning Attacks against Blockchain-based Federated Learning
    Olapojoye, Rukayat
    Baza, Mohamed
    Salman, Tara
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 943 - 949
  • [3] Perception Poisoning Attacks in Federated Learning
    Chow, Ka-Ho
    Liu, Ling
    2021 THIRD IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2021), 2021, : 146 - 155
  • [4] Poisoning Attacks in Federated Learning: A Survey
    Xia, Geming
    Chen, Jian
    Yu, Chaodong
    Ma, Jun
    IEEE ACCESS, 2023, 11 : 10708 - 10722
  • [5] Mitigating Poisoning Attacks in Federated Learning
    Ganjoo, Romit
    Ganjoo, Mehak
    Patil, Madhura
    INNOVATIVE DATA COMMUNICATION TECHNOLOGIES AND APPLICATION, ICIDCA 2021, 2022, 96 : 687 - 699
  • [6] FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning
    Chen, Ling-Yuan
    Chiu, Te-Chuan
    Pang, Ai-Chun
    Cheng, Li-Chen
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [7] SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
    Panda, Ashwinee
    Mahloujifar, Saeed
    Bhagoji, Arjun N.
    Chakraborty, Supriyo
    Mittal, Prateek
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [8] FedATM: Adaptive Trimmed Mean based Federated Learning against Model Poisoning Attacks
    Nishimoto, Kenji
    Chiang, Yi-Han
    Lin, Hai
    Ji, Yusheng
    2023 IEEE 97TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-SPRING, 2023,
  • [9] Romoa: Robust Model Aggregation for the Resistance of Federated Learning to Model Poisoning Attacks
    Mao, Yunlong
    Yuan, Xinyu
    Zhao, Xinyang
    Zhong, Sheng
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 476 - 496
  • [10] Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
    Fang, Minghong
    Cao, Xiaoyu
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 29TH USENIX SECURITY SYMPOSIUM, 2020, : 1623 - 1640