FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority

Cited by: 33
Authors
Dong, Ye [1 ,2 ]
Chen, Xiaojun [1 ,2 ]
Li, Kaiyun [1 ,2 ]
Wang, Dakui [1 ]
Zeng, Shuai [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China
Source
Keywords
Privacy-preserving; Byzantine-robust; Federated learning; Dishonest-majority; FRAMEWORK; EFFICIENT
DOI
10.1007/978-3-030-88418-5_24
Chinese Library Classification
TP31 [Computer software];
Discipline Codes
081202 ; 0835 ;
Abstract
Privacy and Byzantine-robustness are two major concerns of federated learning (FL), but mitigating both threats simultaneously is highly challenging: privacy-preserving strategies prohibit access to individual model updates to avoid leakage, while Byzantine-robust methods require such access for comprehensive mathematical analysis. Moreover, most Byzantine-robust methods only work in the honest-majority setting. We present FLOD, a novel oblivious defender for private Byzantine-robust FL in the dishonest-majority setting. At its core, we propose a novel Hamming distance-based aggregation method that resists attacks even when more than 1/2 of the clients are Byzantine, using a small root-dataset and server-model to bootstrap trust. Furthermore, we employ two non-colluding servers and use additive homomorphic encryption (AHE) and secure two-party computation (2PC) primitives to construct efficient privacy-preserving building blocks for secure aggregation, in which we propose two novel in-depth variants of Beaver multiplication triples (MT) to significantly reduce the overhead of Bit to Arithmetic (Bit2A) conversion and vector weighted sum aggregation (VSWA). Experiments on real-world and synthetic datasets demonstrate FLOD's effectiveness and efficiency: (i) FLOD defeats known Byzantine attacks with a negligible effect on accuracy and convergence, (ii) it achieves a reduction of ~2x in the offline (resp. online) overhead of Bit2A and VSWA compared to ABY-AHE (resp. ABY-MT) based methods (NDSS'15), and (iii) it reduces total online communication and run-time by 167-1416x and 3.1-7.4x compared to FLGUARD (Crypto Eprint 2021/025).
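The Hamming distance-based aggregation described above can be illustrated in the clear (ignoring the AHE/2PC machinery that keeps it oblivious in FLOD). The sketch below is a hypothetical plaintext reconstruction, not the paper's exact algorithm: client updates are binarized to sign bits, each client's Hamming distance to the sign bits of a trusted server-model update (trained on the small root-dataset) determines its weight, and clients whose signs disagree with the reference on at least half the coordinates receive zero weight. The function names and the specific weighting rule are assumptions for illustration.

```python
import numpy as np

def sign_bits(update):
    # Binarize an update vector: 1 where a coordinate is non-negative, else 0.
    return (np.asarray(update) >= 0).astype(np.uint8)

def hamming_aggregate(client_updates, server_update):
    """Plaintext sketch of Hamming distance-based robust aggregation.

    Clients whose update signs agree more with the trusted server-model
    update (bootstrapped from a small root-dataset) receive larger weights;
    clients that disagree on >= half the coordinates are zeroed out, which
    is what tolerates a dishonest majority of Byzantine clients.
    """
    ref = sign_bits(server_update)
    d = ref.size
    weights = []
    for u in client_updates:
        ham = int(np.count_nonzero(sign_bits(u) != ref))  # Hamming distance
        weights.append(max(d - 2 * ham, 0))  # assumed clipped-agreement rule
    total = sum(weights)
    if total == 0:
        # No client is trusted; fall back to the server-model update.
        return np.asarray(server_update, dtype=float).copy()
    return sum(w * np.asarray(u, dtype=float)
               for w, u in zip(weights, client_updates)) / total

# Usage: one honest client aligned with the server direction, one
# sign-flipped (Byzantine) client. The malicious update gets weight 0.
server = np.array([1.0, 1.0, 1.0, 1.0])
honest = np.array([1.1, 0.9, 1.0, 1.2])
malicious = -10.0 * server
agg = hamming_aggregate([honest, malicious], server)
```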
Pages: 497-518
Page count: 22
Related Papers
50 records
  • [31] Byzantine-Robust and Communication-Efficient Personalized Federated Learning
    Zhang, Jiaojiao
    He, Xuechao
    Huang, Yue
    Ling, Qing
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2025, 73 : 26 - 39
  • [32] Byzantine-Robust and Privacy-Preserving Federated Learning With Irregular Participants
    Chen, Yinuo
    Tan, Wuzheng
    Zhong, Yijian
    Kang, Yulin
    Yang, Anjia
    Weng, Jian
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (21): : 35193 - 35205
  • [33] BRFL: A blockchain-based byzantine-robust federated learning model
    Li, Yang
    Xia, Chunhe
    Li, Chang
    Wang, Tianbo
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2025, 196
  • [34] Byzantine-robust federated learning over Non-IID data
    Ma X.
    Li Q.
    Jiang Q.
    Ma Z.
    Gao S.
    Tian Y.
    Ma J.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (06): : 138 - 153
  • [35] Distance-Statistical based Byzantine-robust algorithms in Federated Learning
    Colosimo, Francesco
    De Rango, Floriano
    2024 IEEE 21ST CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE, CCNC, 2024, : 1034 - 1035
  • [36] Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering
    Xu, Jian
    Huang, Shao-Lun
    Song, Linqi
    Lan, Tian
    2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2022), 2022, : 1223 - 1235
  • [37] Byzantine-Robust Multimodal Federated Learning Framework for Intelligent Connected Vehicle
    Wu, Ning
    Lin, Xiaoming
    Lu, Jianbin
    Zhang, Fan
    Chen, Weidong
    Tang, Jianlin
    Xiao, Jing
    ELECTRONICS, 2024, 13 (18)
  • [38] FLGuard: Byzantine-Robust Federated Learning via Ensemble of Contrastive Models
    Lee, Younghan
    Cho, Yungi
    Han, Woorim
    Bae, Ho
    Paek, Yunheung
    COMPUTER SECURITY - ESORICS 2023, PT IV, 2024, 14347 : 65 - 84
  • [39] FedInv: Byzantine-Robust Federated Learning by Inversing Local Model Updates
    Zhao, Bo
    Sun, Peng
    Wang, Tao
    Jiang, Keyu
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 9171 - 9179
  • [40] BFLMeta: Blockchain-Empowered Metaverse with Byzantine-Robust Federated Learning
    Vu Tuan Truong
    Hoang, Duc N. M.
    Long Bao Le
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 5537 - 5542