FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority

Citations: 33
Authors
Dong, Ye [1 ,2 ]
Chen, Xiaojun [1 ,2 ]
Li, Kaiyun [1 ,2 ]
Wang, Dakui [1 ]
Zeng, Shuai [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Keywords
Privacy-preserving; Byzantine-robust; Federated Learning; Dishonest-majority; FRAMEWORK; EFFICIENT
DOI
10.1007/978-3-030-88418-5_24
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
Privacy and Byzantine-robustness are two major concerns of federated learning (FL), but mitigating both threats simultaneously is highly challenging: privacy-preserving strategies prohibit access to individual model updates to avoid leakage, while Byzantine-robust methods require such access for comprehensive mathematical analysis. Besides, most Byzantine-robust methods only work in the honest-majority setting. We present FLOD, a novel oblivious defender for private Byzantine-robust FL in the dishonest-majority setting. Basically, we propose a novel Hamming distance-based aggregation method to resist > 1/2 Byzantine attacks using a small root-dataset and server-model for bootstrapping trust. Furthermore, we employ two non-colluding servers and use additive homomorphic encryption (AHE) and secure two-party computation (2PC) primitives to construct efficient privacy-preserving building blocks for secure aggregation, in which we propose two novel in-depth variants of Beaver multiplication triples (MT) to significantly reduce the overhead of Bit to Arithmetic (Bit2A) conversion and vector weighted sum aggregation (VSWA). Experiments on real-world and synthetic datasets demonstrate our effectiveness and efficiency: (i) FLOD defeats known Byzantine attacks with a negligible effect on accuracy and convergence, (ii) achieves a reduction of ~2x for offline (resp. online) overhead of Bit2A and VSWA compared to ABY-AHE (resp. ABY-MT) based methods (NDSS'15), and (iii) reduces total online communication and run-time by 167-1416x and 3.1-7.4x compared to FLGUARD (Crypto ePrint 2021/025).
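The Hamming distance-based aggregation described above can be sketched in the clear (i.e., without the AHE/2PC machinery that keeps updates private in FLOD). This is a hedged illustration, not the paper's exact algorithm: the reduction of each update to a sign vector, the distance threshold `tau`, and the fallback to the server-model update when every client is rejected are all assumptions made here for the sake of a runnable example.

```python
import numpy as np

def hamming_robust_aggregate(client_updates, server_update, tau=None):
    """Sketch of Hamming distance-based robust aggregation.

    Each update is reduced to its coordinate-wise sign vector. A client's
    weight is max(0, tau - d), where d is the Hamming distance between the
    client's sign vector and that of a trusted server-model update (trained
    on a small root dataset). Clients disagreeing with the trusted update on
    more than `tau` coordinates therefore contribute nothing.
    """
    dim = server_update.size
    if tau is None:
        tau = dim // 2  # assumed default: require majority sign agreement
    server_sign = np.sign(server_update) >= 0  # boolean sign vector

    weights = []
    for u in client_updates:
        client_sign = np.sign(u) >= 0
        dist = np.count_nonzero(client_sign != server_sign)  # Hamming distance
        weights.append(max(0, tau - dist))

    total = sum(weights)
    if total == 0:
        # No client passed the check: fall back to the trusted update.
        return server_update.copy()
    return sum(w * u for w, u in zip(weights, client_updates)) / total
```

In FLOD itself, the comparison and weighted sum are evaluated obliviously between the two non-colluding servers (via Bit2A and VSWA), so neither server ever sees an individual update in the clear; the sketch above only shows the plaintext functionality being computed.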
Pages: 497-518
Page count: 22
Related Papers
50 records
  • [41] Using Third-Party Auditor to Help Federated Learning: An Efficient Byzantine-Robust Federated Learning
    Zhang, Zhuangzhuang
    Wu, Libing
    He, Debiao
    Li, Jianxin
    Lu, Na
    Wei, Xuejiang
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING, 2024, 9 (06): 848-861
  • [42] RSAM: Byzantine-Robust and Secure Model Aggregation in Federated Learning for Internet of Vehicles Using Private Approximate Median
    He, Yuanyuan
    Li, Peizhi
    Ni, Jianbing
    Deng, Xianjun
    Lu, Hongwei
    Zhang, Jie
    Yang, Laurence T.
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (05): 6714-6726
  • [43] Defense against local model poisoning attacks to byzantine-robust federated learning
Lu, Shiwei
Li, Ruihu
Chen, Xuan
Ma, Yuena
FRONTIERS OF COMPUTER SCIENCE, 2022, 16 (06)
  • [44] Efficient Byzantine-Robust and Privacy-Preserving Federated Learning on Compressive Domain
    Hu, Guiqiang
    Li, Hongwei
    Fan, Wenshu
    Zhang, Yushu
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (04): : 7116 - 7127
  • [45] Byzantine-Robust Federated Learning via Server-Side Mixture of Experts
    Li, Jing (lj@ustc.edu.cn), 1600, Springer Science and Business Media Deutschland GmbH (14326 LNAI):
  • [46] Privacy-Preserving Byzantine-Robust Federated Learning via Blockchain Systems
    Miao, Yinbin
    Liu, Ziteng
    Li, Hongwei
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17: 2848-2861
  • [48] AFL: Attention-based Byzantine-robust Federated Learning with Vector Filter
    Chen, Hao
    Lv, Xixiang
    Zheng, Wei
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 595 - 600
  • [49] FedAegis: Edge-Based Byzantine-Robust Federated Learning for Heterogeneous Data
    Zhou, Fangtong
    Yu, Ruozhou
    Li, Zhouyu
    Gu, Huayue
    Wang, Xiaojian
    2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM 2022), 2022, : 3005 - 3010