FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping

Cited by: 208
Authors
Cao, Xiaoyu [1 ]
Fang, Minghong [2 ]
Liu, Jia [2 ]
Gong, Neil Zhenqiang [1 ]
Affiliations
[1] Duke Univ, Durham, NC 27706 USA
[2] Ohio State Univ, Columbus, OH 43210 USA
Funding
US National Science Foundation;
Keywords
DOI
10.14722/ndss.2021.24434
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
Byzantine-robust federated learning aims to enable a service provider to learn an accurate global model even when a bounded number of clients are malicious. The key idea of existing Byzantine-robust federated learning methods is that the service provider performs statistical analysis of the clients' local model updates and removes suspicious ones before aggregating them to update the global model. However, malicious clients can still corrupt the global models in these methods by sending carefully crafted local model updates to the service provider. The fundamental reason is that existing federated learning methods lack a root of trust: from the service provider's perspective, every client could be malicious. In this work, we bridge this gap by proposing FLTrust, a new federated learning method in which the service provider itself bootstraps trust. Specifically, the service provider collects a small, clean training dataset (called the root dataset) for the learning task and maintains a model (called the server model) based on it to bootstrap trust. In each iteration, the service provider first assigns a trust score to each local model update from the clients, where a local model update receives a lower trust score if its direction deviates more from the direction of the server model update. Then, the service provider normalizes the magnitudes of the local model updates such that they lie on the same hyper-sphere as the server model update in the vector space. This normalization limits the impact of malicious local model updates with large magnitudes. Finally, the service provider computes the average of the normalized local model updates, weighted by their trust scores, as the global model update, which is used to update the global model. Our extensive evaluation on six datasets from different domains shows that FLTrust is secure against both existing attacks and strong adaptive attacks.
For instance, using a root dataset with less than 100 examples, FLTrust under adaptive attacks with 40%-60% of malicious clients can still train global models that are as accurate as the global models trained by FedAvg under no attacks, where FedAvg is a popular method in non-adversarial settings.
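The aggregation rule described in the abstract can be sketched as follows: the trust score is the ReLU-clipped cosine similarity between a client's update and the server model update, each accepted update is rescaled to the server update's magnitude, and the result is the trust-weighted average. This is a minimal NumPy sketch treating model updates as flat vectors; the function name and the handling of degenerate (zero-norm, zero-trust) cases are illustrative, not taken from the paper.

```python
import numpy as np

def fltrust_aggregate(server_update, local_updates):
    """Trust-weighted aggregation in the style of FLTrust (sketch).

    server_update: 1-D array, the update computed on the server's root dataset.
    local_updates: iterable of 1-D arrays, one update per client.
    Returns the aggregated global model update.
    """
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    weighted_sum = np.zeros_like(g0)
    total_trust = 0.0
    for g in local_updates:
        g = np.asarray(g, dtype=float)
        g_norm = np.linalg.norm(g)
        if g_norm == 0.0:
            continue  # a zero update carries no direction; skip it
        # Trust score: cosine similarity with the server update, clipped at 0
        # so updates pointing away from the server direction get zero weight.
        trust = max(float(g0 @ g) / (g0_norm * g_norm), 0.0)
        # Rescale the local update to the server update's magnitude, which
        # bounds the influence of maliciously large updates.
        weighted_sum += trust * (g0_norm / g_norm) * g
        total_trust += trust
    if total_trust == 0.0:
        return np.zeros_like(g0)  # no client earned any trust this round
    return weighted_sum / total_trust
```

For example, with server update `[1, 0]`, a scaled-up honest update `[2, 0]` is renormalized and fully trusted, while an opposite-direction update `[-1, 0]` and an orthogonal update `[0, 3]` both receive zero trust and are ignored.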
Pages: 18
Related Papers
50 records in total
  • [1] SIREN: Byzantine-robust Federated Learning via Proactive Alarming
    Guo, Hanxi
    Wang, Hao
    Song, Tao
    Hua, Yang
    Lv, Zhangcheng
    Jin, Xiulang
    Xue, Zhengui
    Ma, Ruhui
    Guan, Haibing
    PROCEEDINGS OF THE 2021 ACM SYMPOSIUM ON CLOUD COMPUTING (SOCC '21), 2021, : 47 - 60
  • [2] Byzantine-robust Federated Learning via Cosine Similarity Aggregation
    Zhu, Tengteng
    Guo, Zehua
    Yao, Chao
    Tan, Jiaxin
    Dou, Songshi
    Wang, Wenrun
    Han, Zhenzhen
    COMPUTER NETWORKS, 2024, 254
  • [3] Byzantine-Robust Aggregation for Federated Learning with Reinforcement Learning
    Yan, Sizheng
    Du, Junping
    Xue, Zhe
    Li, Ang
    WEB AND BIG DATA, APWEB-WAIM 2024, PT IV, 2024, 14964 : 152 - 166
  • [4] AFLGuard: Byzantine-robust Asynchronous Federated Learning
    Fang, Minghong
    Liu, Jia
    Gong, Neil Zhenqiang
    Bentley, Elizabeth S.
    PROCEEDINGS OF THE 38TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2022, 2022, : 632 - 646
  • [5] Differentially Private Byzantine-Robust Federated Learning
    Ma, Xu
    Sun, Xiaoqian
    Wu, Yuduo
    Liu, Zheli
    Chen, Xiaofeng
    Dong, Changyu
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, 33 (12) : 3690 - 3701
  • [6] FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models
    Lee, Younghan
    Cho, Yungi
    Han, Woorim
    Bae, Ho
    Paek, Yunheung
    COMPUTER SECURITY - ESORICS 2023, PT IV, 2024, 14347 : 65 - 84
  • [7] Better Safe Than Sorry: Constructing Byzantine-Robust Federated Learning with Synthesized Trust
    Geng, Gangchao
    Cai, Tianyang
    Yang, Zheng
    ELECTRONICS, 2023, 12 (13)
  • [8] FedSuper: A Byzantine-Robust Federated Learning Under Supervision
    Zhao, Ping
    Jiang, Jin
    Zhang, Guanglin
    ACM TRANSACTIONS ON SENSOR NETWORKS, 2024, 20 (02)
  • [9] Byzantine-robust federated learning with ensemble incentive mechanism
    Zhao, Shihai
    Pu, Juncheng
    Fu, Xiaodong
    Liu, Li
    Dai, Fei
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 159 : 272 - 283
  • [10] CareFL: Contribution Guided Byzantine-Robust Federated Learning
    Dong, Qihao
    Yang, Shengyuan
    Dai, Zhiyang
    Gao, Yansong
    Wang, Shang
    Cao, Yuan
    Fu, Anmin
    Susilo, Willy
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9714 - 9729