MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients

Cited by: 50
Authors
Cao, Xiaoyu [1]
Gong, Neil Zhenqiang [1]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
Funding
U.S. National Science Foundation
DOI
10.1109/CVPRW56347.2022.00383
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Existing model poisoning attacks against federated learning assume that an attacker controls a large fraction of compromised genuine clients. However, such an assumption is unrealistic in production federated learning systems that involve millions of clients. In this work, we propose the first Model Poisoning Attack based on Fake clients, called MPAF. Specifically, we assume the attacker injects fake clients into a federated learning system and sends carefully crafted fake local model updates to the cloud server during training, such that the learned global model has low accuracy on indiscriminate test inputs. Toward this goal, our attack drags the global model toward an attacker-chosen base model that has low accuracy. In each round of federated learning, the fake clients craft fake local model updates that point toward the base model and scale them up to amplify their impact before sending them to the cloud server. Our experiments show that MPAF can significantly decrease the test accuracy of the global model even when classical defenses and norm clipping are adopted, highlighting the need for more advanced defenses.
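The crafting step described in the abstract can be sketched as follows. This is a minimal toy illustration, assuming FedAvg-style unweighted averaging on the server; the base model, scaling factor, and client counts are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def mpaf_fake_update(w_global, w_base, scale):
    # The fake update points from the current global model toward the
    # attacker-chosen base model, scaled up to amplify its impact.
    return scale * (w_base - w_global)

rng = np.random.default_rng(0)
w_global = rng.normal(size=5)   # current global model parameters
w_base = np.zeros(5)            # attacker-chosen low-accuracy base model

# 8 genuine clients send small benign updates; 2 fake clients attack.
genuine = [rng.normal(scale=0.01, size=5) for _ in range(8)]
fake = [mpaf_fake_update(w_global, w_base, scale=5.0) for _ in range(2)]

# FedAvg-style aggregation: unweighted mean over all received updates.
w_new = w_global + np.mean(genuine + fake, axis=0)

# With 2 of 10 updates scaled by 5, the aggregated step is roughly
# (w_base - w_global), so the new global model lands near the base model.
assert np.linalg.norm(w_new - w_base) < np.linalg.norm(w_global - w_base)
```

In practice the scaling factor is chosen large enough that the fake updates dominate the average even when the fake-client fraction is small, which is why the paper evaluates norm-clipping defenses.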
Pages: 3395-3403 (9 pages)
Related papers
50 records
  • [41] Defense against local model poisoning attacks to byzantine-robust federated learning
    Lu, Shiwei
    Li, Ruihu
    Chen, Xuan
    Ma, Yuena
    Frontiers of Computer Science, 2022, 16
  • [42] A comprehensive analysis of model poisoning attacks in federated learning for autonomous vehicles: A benchmark study
    Almutairi, Suzan
    Barnawi, Ahmed
    RESULTS IN ENGINEERING, 2024, 24
  • [43] DeSMP: Differential Privacy-exploited Stealthy Model Poisoning Attacks in Federated Learning
    Hossain, Md Tamjid
    Islam, Shafkat
    Badsha, Shahriar
    Shen, Haoting
    2021 17TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING (MSN 2021), 2021, : 167 - 174
  • [44] On the Performance Impact of Poisoning Attacks on Load Forecasting in Federated Learning
    Qureshi, Naik Bakht Sania
    Kim, Dong-Hoon
    Lee, Jiwoo
    Lee, Eun-Kyu
    UBICOMP/ISWC '21 ADJUNCT: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2021 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2021, : 64 - 66
  • [45] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Victoria Luzon, M.
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [46] Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks
    Carvalho, Ines
    Huff, Kenton
    Gruenwald, Le
    Bernardino, Jorge
    APPLIED SCIENCES-BASEL, 2024, 14 (22)
  • [47] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [48] Clean-label poisoning attacks on federated learning for IoT
    Yang, Jie
    Zheng, Jun
    Baker, Thar
    Tang, Shuai
    Tan, Yu-an
    Zhang, Quanxin
    EXPERT SYSTEMS, 2023, 40 (05)
  • [49] DPFLA: Defending Private Federated Learning Against Poisoning Attacks
    Feng, Xia
    Cheng, Wenhao
    Cao, Chunjie
    Wang, Liangmin
    Sheng, Victor S.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1480 - 1491
  • [50] Privacy-Preserving Detection of Poisoning Attacks in Federated Learning
    Muhr, Trent
    Zhang, Wensheng
    2022 19TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY & TRUST (PST), 2022