FedSlice: Protecting Federated Learning Models from Malicious Participants with Model Slicing

Cited by: 2
Authors
Zhang, Ziqi [1 ]
Li, Yuanchun [2 ]
Liu, Bingyan [3 ]
Cai, Yifeng [1 ]
Li, Ding [1 ]
Guo, Yao [1 ]
Chen, Xiangqun [1 ]
Affiliations
[1] Peking Univ, Key Lab High Confidence Software Technol MOE, Sch Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, Inst AI Ind Res AIR, Beijing, Peoples R China
[3] Beijing Univ Posts & Telecommun, Sch Comp Sci, Beijing, Peoples R China
Source
2023 IEEE/ACM 45TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING, ICSE | 2023
Funding
National Natural Science Foundation of China;
DOI
10.1109/ICSE48619.2023.00049
Chinese Library Classification
TP31 [Computer Software];
Discipline Code
081202 ; 0835 ;
Abstract
Crowdsourcing Federated Learning (CFL) is a new crowdsourcing development paradigm for Deep Neural Network (DNN) models, also called "software 2.0". In practice, the privacy of CFL can be compromised by many attacks, such as free-rider attacks, adversarial attacks, gradient leakage attacks, and inference attacks. Conventional defensive techniques have low efficiency because they deploy heavy encryption techniques or rely on Trusted Execution Environments (TEEs). To improve the efficiency of protecting CFL from these attacks, this paper proposes FedSlice to prevent malicious participants from obtaining the whole server-side model while preserving the performance goal of CFL. FedSlice breaks the server-side model into several slices and delivers one slice to each participant. Thus, a malicious participant can only obtain a subset of the server-side model, preventing them from conducting effective attacks. We evaluate FedSlice against these attacks, and the results show that FedSlice provides effective defense: the server-side model leakage is reduced from 100% to 43.45%, the success rate of adversarial attacks is reduced from 100% to 11.66%, the average accuracy of membership inference is reduced from 71.91% to 51.58%, and the data leakage from shared gradients is reduced to the level of random guesses. Moreover, FedSlice introduces less than 2% accuracy loss and about 14% computation overhead. To the best of our knowledge, this is the first paper to discuss defense methods against these attacks on the CFL framework.
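The slicing idea the abstract describes — breaking the server-side model into slices so that each participant receives only one — can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the function `slice_model` and the round-robin partition scheme are hypothetical, chosen only to make the "each participant sees a subset" property concrete.

```python
# Illustrative sketch of model slicing (hypothetical, not FedSlice's actual
# algorithm): partition each layer's units round-robin so that every
# participant receives a strict subset of the server-side model.

def slice_model(server_weights, num_participants):
    """Split each layer's per-unit weights into num_participants slices.

    server_weights: dict mapping layer name -> list of per-unit weights.
    Returns one sub-model (dict) per participant; no single participant
    holds the full server-side model.
    """
    slices = [dict() for _ in range(num_participants)]
    for layer, units in server_weights.items():
        for i, s in enumerate(slices):
            # slice i keeps units i, i+n, i+2n, ... of this layer
            s[layer] = units[i::num_participants]
    return slices

# Toy server-side model: two layers with 8 and 4 units.
server = {"fc1": list(range(8)), "fc2": list(range(4))}
parts = slice_model(server, 2)
print(parts[0])  # {'fc1': [0, 2, 4, 6], 'fc2': [0, 2]}
print(parts[1])  # {'fc1': [1, 3, 5, 7], 'fc2': [1, 3]}
```

Under this sketch, a malicious participant holding `parts[0]` can reconstruct at most half of each layer, which mirrors the partial-leakage defense the abstract quantifies (model leakage reduced from 100% to 43.45%).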
Pages: 460-472
Page count: 13
Related Papers
50 records total
  • [1] FLUK: Protecting Federated Learning Against Malicious Clients for Internet of Vehicles
    Zhu, Mengde
    Ning, Wanyi
    Qi, Qi
    Wang, Jingyu
    Zhuang, Zirui
    Sun, Haifeng
    Huang, Jun
    Liao, Jianxin
    EURO-PAR 2024: PARALLEL PROCESSING, PART II, EURO-PAR 2024, 2024, 14802 : 454 - 469
  • [2] Detecting Malicious Model Updates from Federated Learning on Conditional Variational Autoencoder
    Gu, Zhipin
    Yang, Yuexiang
    2021 IEEE 35TH INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM (IPDPS), 2021, : 671 - 680
  • [3] PromptFL: Let Federated Participants Cooperatively Learn Prompts Instead of Models - Federated Learning in Age of Foundation Model
    Guo, Tao
    Guo, Song
    Wang, Junxiao
    Tang, Xueyang
    Xu, Wenchao
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (05) : 5179 - 5194
  • [4] Federated learning secure model: A framework for malicious clients detection
    Kolasa, Dominik
    Pilch, Kinga
    Mazurczyk, Wojciech
    SOFTWAREX, 2024, 27
  • [5] Malicious Models-based Federated Learning in Fog Computing Networks
    Huang, Xiaoge
    Ren, Yang
    He, Yong
    Chen, Qianbin
    2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 192 - 196
  • [6] Malicious Model Detection for Federated Learning Empowered Energy Storage Systems
    Wang, Xu
    Chen, Yuanzhu
    Dobre, Octavia A.
    2023 INTERNATIONAL CONFERENCE ON COMPUTING, NETWORKING AND COMMUNICATIONS, ICNC, 2023, : 501 - 505
  • [7] FedCAM - Identifying Malicious Models in Federated Learning Environments Conditionally to Their Activation Maps
    Bellafqira, Reda
    Coatrieux, Gouenou
    Lansari, Mohammed
    Chala, Jilo
    2024 19TH WIRELESS ON-DEMAND NETWORK SYSTEMS AND SERVICES CONFERENCE, WONS, 2024, : 49 - 56
  • [8] Evolutionary Multi-model Federated Learning on Malicious and Heterogeneous Data
    Shang, Chikai
    Gu, Fangqing
    Jiang, Jiaqi
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 386 - 395
  • [9] Blockchain-enabled Defense Mechanism for Protecting Federated Learning Systems against Malicious Node Updates
    Attiaoui, Adil
    Kobbane, Abdellatif
    Elhachmi, Jamal
    Ayaida, Marwane
    Chougdali, Khalid
    4TH INTERDISCIPLINARY CONFERENCE ON ELECTRICS AND COMPUTER, INTCEC 2024, 2024,
  • [10] A Clustering-Based Scoring Mechanism for Malicious Model Detection in Federated Learning
    Caglayan, Cem
    Yurdakul, Arda
    2022 25TH EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD), 2022, : 224 - 231