Efficient, Private and Robust Federated Learning

Cited by: 24
Authors
Hao, Meng [1 ]
Li, Hongwei [1 ]
Xu, Guowen [2 ]
Chen, Hanxiao [1 ]
Zhang, Tianwei [2 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[2] Nanyang Technol Univ, Singapore, Singapore
Funding
National Natural Science Foundation of China
Keywords
Federated learning; Privacy protection; Byzantine robustness;
DOI
10.1145/3485832.3488014
Chinese Library Classification
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Federated learning (FL) has demonstrated tremendous success in various mission-critical large-scale scenarios. However, this promising distributed learning paradigm remains vulnerable to privacy inference and Byzantine attacks: the former aims to infer private information about the participants involved in training, while the latter aims to destroy the integrity of the trained model. To mitigate these two issues, a few recent works have explored unified solutions that combine generic secure computation techniques with common Byzantine-robust aggregation rules, but they have two major limitations: 1) they are impractical due to efficiency bottlenecks, and 2) they remain vulnerable to various types of attacks because their defenses are not comprehensive. To address these problems, in this paper we present SecureFL, an efficient, private, and Byzantine-robust FL framework. SecureFL follows the state-of-the-art Byzantine-robust FL method (FLTrust, NDSS'21), which performs comprehensive Byzantine defense by normalizing the magnitude of updates and measuring their directional similarity, and adapts it to the privacy-preserving setting. More importantly, we carefully customize a series of cryptographic components. First, we design a crypto-friendly validity-checking protocol that functionally replaces the normalization operation in FLTrust, and we devise tailored cryptographic protocols on top of it. These optimizations cut the communication and computation costs by half without sacrificing robustness or privacy protection. Second, we develop a novel preprocessing technique for costly matrix multiplication, with which the directional similarity measurement can be evaluated securely with negligible computation overhead and zero communication cost.
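The FLTrust aggregation rule that SecureFL builds on can be illustrated in the clear. The sketch below (plaintext only, with hypothetical function names; the paper's contribution is evaluating this rule under cryptographic protection) shows the two operations the abstract mentions: directional similarity against a trusted server update, and magnitude normalization.

```python
import numpy as np

def fltrust_aggregate(updates, server_update):
    """FLTrust-style robust aggregation (plaintext sketch).

    Each client update is weighted by a ReLU-clipped cosine similarity
    to a trusted server update computed on a small clean root dataset,
    and its magnitude is normalized to that of the server update.
    """
    g0 = np.asarray(server_update, dtype=float)
    g0_norm = np.linalg.norm(g0)
    scores, normalized = [], []
    for g in updates:
        g = np.asarray(g, dtype=float)
        # directional similarity: cosine between client and server update
        cos = g0 @ g / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))  # ReLU: drop negatively aligned updates
        # magnitude normalization: rescale to the server update's norm
        normalized.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    total = sum(scores)
    if total == 0:
        return np.zeros_like(g0)
    return sum(s * g for s, g in zip(scores, normalized)) / total
```

With a trusted direction of `[1, 0]`, an aligned update keeps full weight, an orthogonal one gets weight 0, and an opposite one is discarded entirely.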
Extensive evaluations on three real-world datasets and various neural network architectures demonstrate that SecureFL outperforms prior art by up to two orders of magnitude in efficiency while achieving state-of-the-art Byzantine robustness.
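The abstract does not detail SecureFL's matrix-multiplication preprocessing (which achieves zero online communication). For context, the standard baseline such techniques improve upon is Beaver-triple-based secret-shared matrix multiplication: an input-independent offline phase produces correlated randomness, and the online phase needs only one round to open masked inputs. A minimal two-party additive-sharing sketch, with an illustrative small modulus and function names not taken from the paper:

```python
import numpy as np

P = 65537  # small prime modulus; small enough to avoid int64 overflow here
rng = np.random.default_rng(0)

def share(x):
    """Split a matrix into two additive shares modulo P."""
    r = rng.integers(0, P, size=np.shape(x), dtype=np.int64)
    return r, (np.asarray(x, dtype=np.int64) - r) % P

def matmul_triple(n, m, k):
    """Offline phase: random matrices A, B and shares of C = A @ B mod P."""
    A = rng.integers(0, P, size=(n, m), dtype=np.int64)
    B = rng.integers(0, P, size=(m, k), dtype=np.int64)
    return share(A), share(B), share((A @ B) % P)

def beaver_matmul(xs, ys):
    """Online phase: multiply secret-shared X and Y using one triple.

    The only communication is opening the masked matrices D and E,
    which reveal nothing about X and Y because A and B are uniform.
    """
    x0, x1 = xs
    y0, y1 = ys
    (a0, a1), (b0, b1), (c0, c1) = matmul_triple(
        x0.shape[0], x0.shape[1], y0.shape[1])
    D = (x0 + x1 - a0 - a1) % P  # opens X - A
    E = (y0 + y1 - b0 - b1) % P  # opens Y - B
    # each party computes its output share locally:
    # X @ Y = D @ E + D @ B + A @ E + C  (mod P)
    z0 = (D @ E + D @ b0 + a0 @ E + c0) % P
    z1 = (D @ b1 + a1 @ E + c1) % P
    return (z0 + z1) % P  # reconstruct X @ Y mod P
```

Because the triple is input-independent, all expensive work moves offline; SecureFL's technique goes further by eliminating even the online opening round for its similarity computation.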
Pages: 45-60
Page count: 16