Provably Secure Federated Learning against Malicious Clients

Cited by: 0
Authors: Cao, Xiaoyu [1]; Jia, Jinyuan [1]; Gong, Neil Zhenqiang [1]
Affiliations: [1] Duke University, Durham, NC 27708, USA
Keywords: (none listed)
DOI: (not available)
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
Federated learning enables clients to collaboratively learn a shared global model without sharing their local training data with a cloud server. However, malicious clients can corrupt the global model so that it predicts incorrect labels for testing examples. Existing defenses against malicious clients leverage Byzantine-robust federated learning methods. However, these methods cannot provably guarantee that the predicted label for a testing example is unaffected by malicious clients. We bridge this gap via ensemble federated learning. In particular, given any base federated learning algorithm, we use the algorithm to learn multiple global models, each of which is learnt using a randomly selected subset of clients. When predicting the label of a testing example, we take a majority vote among the global models. We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients. Specifically, the label predicted by our ensemble global model for a testing example is provably unaffected by a bounded number of malicious clients. Moreover, we show that our derived bound is tight. We evaluate our method on the MNIST and Human Activity Recognition datasets. For instance, our method achieves a certified accuracy of 88% on MNIST when 20 out of 1,000 clients are malicious.
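The abstract describes the ensemble procedure only at a high level; the minimal Python sketch below illustrates the two steps it names, training global models on random client subsets and taking a majority vote at prediction time. All names here (base_fl_train, model.predict, subset_size, and so on) are hypothetical placeholders, not the authors' implementation.

import random
from collections import Counter

def train_ensemble(clients, base_fl_train, num_models, subset_size, seed=0):
    """Train num_models global models, each produced by a base federated
    learning algorithm run on a randomly selected subset of clients.
    base_fl_train is a hypothetical callable standing in for any base FL
    algorithm (e.g., FedAvg); it returns a trained global model."""
    rng = random.Random(seed)
    models = []
    for _ in range(num_models):
        subset = rng.sample(clients, subset_size)  # random client subset
        models.append(base_fl_train(subset))       # one global model per subset
    return models

def ensemble_predict(models, x):
    """Predict the label of testing example x by majority vote among the
    global models."""
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]

The intuition behind the certified bound is that a global model's vote can change only if its randomly selected client subset contains at least one malicious client; roughly, as long as the majority label's vote margin over the runner-up exceeds twice the number of models whose subsets could include malicious clients, the ensemble's predicted label cannot flip.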
Pages: 6885–6893 (9 pages)
Related Papers (10 of 50 shown)
  • [1] Kolasa, Dominik; Pilch, Kinga; Mazurczyk, Wojciech. Federated learning secure model: A framework for malicious clients detection. SoftwareX, 2024, 27.
  • [2] Cao, Xiaoyu; Zhang, Zaixi; Jia, Jinyuan; Gong, Neil Zhenqiang. FLCert: Provably Secure Federated Learning Against Poisoning Attacks. IEEE Transactions on Information Forensics and Security, 2022, 17: 3691–3705.
  • [3] Rathee, Mayank; Shen, Conghao; Wagh, Sameer; Popa, Raluca Ada. ELSA: Secure Aggregation for Federated Learning with Malicious Actors. 2023 IEEE Symposium on Security and Privacy (SP), 2023: 1961–1979.
  • [4] Zhang, Zaixi; Cao, Xiaoyu; Jia, Jinyuan; Gong, Neil Zhenqiang. FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 2545–2555.
  • [5] Tang, Jinling; Xu, Haixia; Wang, Mingsheng; Tang, Tao; Peng, Chunying; Liao, Huimei. A Flexible and Scalable Malicious Secure Aggregation Protocol for Federated Learning. IEEE Transactions on Information Forensics and Security, 2024, 19: 4174–4187.
  • [6] Xu, Guowen; Han, Xingshuo; Zhang, Tianwei; Xu, Shengmin; Ning, Jianting; Huang, Xinyi; Li, Hongwei; Deng, Robert H. SIMC 2.0: Improved Secure ML Inference Against Malicious Clients. IEEE Transactions on Dependable and Secure Computing, 2024, 21(4): 1708–1723.
  • [7] Onsu, Murat Arda; Kantarci, Burak; Boukerche, Azzedine. How to cope with malicious federated learning clients: An unsupervised learning-based approach. Computer Networks, 2023, 234.
  • [8] Wang, Dongxia; Muller, Tim; Sun, Jun. Provably Secure Decisions Based on Potentially Malicious Information. IEEE Transactions on Dependable and Secure Computing, 2024, 21(5): 4388–4403.
  • [9] Wei, Ming; Liu, Xiaofan; Ren, Wei. RoFL: A Robust Federated Learning Scheme Against Malicious Attacks. Web and Big Data, Part III, APWeb-WAIM 2022, 2023, 13423: 277–291.
  • [10] Yuan, Zheng; Tian, Youliang; Zhou, Zhou; Li, Ta; Wang, Shuai; Xiong, Jinbo. Trustworthy Federated Learning Against Malicious Attacks in Web 3.0. IEEE Transactions on Network Science and Engineering, 2024, 11(5): 3969–3982.