FedSuper: A Byzantine-Robust Federated Learning Under Supervision

Times Cited: 0
Authors
Zhao, Ping [1 ,2 ]
Jiang, Jin [1 ,2 ]
Zhang, Guanglin [1 ,2 ]
Affiliations
[1] Donghua Univ, Coll Informat Sci & Technol, Shanghai, Peoples R China
[2] Donghua Univ, Coll Informat Sci & Technol, 2999 Renmin North Rd, Shanghai 201620, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Federated learning; Byzantine attack; Byzantine ratio; non-IID; MOBILITY;
DOI
10.1145/3630099
Chinese Library Classification
TP [Automation & Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL) is a machine learning setting in which multiple worker devices collaboratively train a model under the orchestration of a central server while keeping the training data local. However, owing to the lack of supervision over worker devices, FL is vulnerable to Byzantine attacks, in which worker devices controlled by an adversary arbitrarily generate poisoned local models and send them to the FL server, ultimately degrading the utility (e.g., model accuracy) of the global model. Most existing Byzantine-robust algorithms, however, cannot effectively counter threatening Byzantine attacks when the ratio of compromised worker devices (i.e., the Byzantine ratio) exceeds 0.5 and the worker devices' local training datasets are not independent and identically distributed (non-IID). We propose a novel Byzantine-robust Federated Learning under Supervision (FedSuper), which maintains robustness even in the threatening scenario with a very high Byzantine ratio (0.9 in our experiments) and the largest level of non-IID data (1.0 in our experiments) when state-of-the-art Byzantine attacks are conducted. The main idea of FedSuper is that the FL server supervises worker devices by injecting a shadow dataset into their local training processes. Moreover, based on the local models' accuracies or losses on the shadow dataset, we design a Local Model Filter that removes poisoned local models and outputs an optimal global model. Extensive experimental results on three real-world datasets demonstrate the effectiveness and superior performance of FedSuper in defending against Byzantine attacks, compared to five recent Byzantine-robust FL algorithms and two baselines.
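The filtering idea described in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the names `filter_local_models`, `aggregate`, the `shadow_eval` callable, and the fixed accuracy `threshold` are all assumptions introduced here, and the aggregation shown is a plain FedAvg-style mean over the surviving models.

```python
# Hypothetical sketch of a shadow-dataset-based Local Model Filter,
# in the spirit of FedSuper's description (details assumed, not from the paper).

def filter_local_models(local_models, shadow_eval, threshold):
    """Keep only local models whose accuracy on the server's shadow dataset
    meets the threshold; the rest are treated as poisoned and discarded.

    local_models: dict mapping worker id -> model parameters (list of floats)
    shadow_eval:  callable(model) -> accuracy on the shadow dataset (assumed)
    threshold:    minimum acceptable shadow accuracy (assumed policy)
    """
    return {wid: m for wid, m in local_models.items()
            if shadow_eval(m) >= threshold}


def aggregate(kept_models):
    """Average the surviving local models into a global model (FedAvg-style)."""
    models = list(kept_models.values())
    n = len(models)
    # Element-wise mean over the parameter vectors of the kept models.
    return [sum(params) / n for params in zip(*models)]


if __name__ == "__main__":
    # Toy example: two benign workers near the true parameters, one poisoned.
    local_models = {
        "w1": [1.0, 1.0],
        "w2": [1.2, 0.8],
        "w3": [100.0, -50.0],  # poisoned local model
    }
    # Toy accuracy proxy standing in for evaluation on a real shadow dataset.
    shadow_eval = lambda m: 1.0 - min(1.0, abs(m[0] - 1.0))
    kept = filter_local_models(local_models, shadow_eval, threshold=0.5)
    global_model = aggregate(kept)
    print(kept.keys(), global_model)
```

In this toy run the poisoned worker `w3` scores 0.0 on the shadow-accuracy proxy and is filtered out, so only the two benign models are averaged into the global model.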
Pages: 29