SEAR: Secure and Efficient Aggregation for Byzantine-Robust Federated Learning

Cited by: 42
Authors
Zhao, Lingchen [1 ]
Jiang, Jianlin [2 ]
Feng, Bo [3 ]
Wang, Qian [1 ]
Shen, Chao [4 ]
Li, Qi [5 ,6 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Hubei, Peoples R China
[2] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Hubei, Peoples R China
[3] Northeastern Univ, Khoury Coll Comp Sci, Boston, MA 02115 USA
[4] Xi An Jiao Tong Univ, Sch Cyber Sci & Engn, MOE Key Lab Intelligent Networks & Network Secur, Xian 710049, Shaanxi, Peoples R China
[5] Tsinghua Univ, Inst Network Sci & Cyberspace, Beijing 100084, Peoples R China
[6] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRis, Beijing 100084, Peoples R China
Funding
National Key Research and Development Program of China;
Keywords
Servers; Computational modeling; Collaborative work; Data models; Privacy; Cryptography; Training data; Federated learning; secure aggregation; trusted execution environment;
DOI
10.1109/TDSC.2021.3093711
CLC number (Chinese Library Classification)
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
Federated learning facilitates the collaborative training of a global model among distributed clients without sharing their training data. Secure aggregation, a new security primitive for federated learning, aims to preserve the confidentiality of both local models and training data. Unfortunately, existing secure aggregation solutions fail to defend against Byzantine failures that are common in distributed computing systems. In this work, we propose a new secure and efficient aggregation framework, SEAR, for Byzantine-robust federated learning. Relying on a trusted execution environment, i.e., Intel SGX, SEAR protects clients' private models while enabling Byzantine resilience. Considering the limitation of the current Intel SGX architecture (i.e., the limited trusted memory), we propose two data storage modes to implement aggregation algorithms efficiently in SGX. Moreover, to balance the efficiency and performance of aggregation, we propose a sampling-based method that detects Byzantine failures efficiently without degrading the global model's performance. We implement and evaluate SEAR in a LAN environment, and the experimental results show that SEAR is computationally efficient and robust to Byzantine adversaries. Compared to the previous practical secure aggregation framework, SEAR improves aggregation efficiency by 4-6 times while additionally supporting Byzantine resilience.
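The abstract does not specify the details of the sampling-based Byzantine detection; the sketch below is a generic illustration of the idea under assumptions, not the paper's actual algorithm: rather than inspecting every coordinate of every client update, the aggregator samples a small random subset of coordinates, computes each client's distance to a robust (median) reference on that subset, and flags clients whose distance is a large robust outlier. All function names and thresholds here are hypothetical.

```python
import numpy as np

def detect_byzantine(updates, sample_frac=0.01, z_thresh=3.0, seed=0):
    """Flag likely-Byzantine clients from a (n_clients, dim) array of
    local model updates, using only a random sample of coordinates.
    Returns a boolean mask; True = flagged as suspicious.
    (Illustrative sketch, not the algorithm from the SEAR paper.)"""
    rng = np.random.default_rng(seed)
    n, dim = updates.shape
    k = max(1, int(dim * sample_frac))
    idx = rng.choice(dim, size=k, replace=False)   # sampled coordinates
    sampled = updates[:, idx]                      # (n, k) sub-matrix
    center = np.median(sampled, axis=0)            # robust reference point
    dists = np.linalg.norm(sampled - center, axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med))           # robust spread estimate
    scores = (dists - med) / (1.4826 * mad + 1e-12)  # robust z-scores
    return scores > z_thresh

def robust_aggregate(updates, mask):
    """Average only the updates that were not flagged."""
    return updates[~mask].mean(axis=0)
```

Because only `k = sample_frac * dim` coordinates are touched per client, the detection cost scales with the sample size rather than the full model dimension, which is the kind of efficiency/robustness trade-off the abstract describes.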
Pages: 3329-3342 (14 pages)