FLShield: A Validation Based Federated Learning Framework to Defend Against Poisoning Attacks

Cited by: 0
Authors:
Kabir, Ehsanul [1 ]
Song, Zeyu [1 ]
Rashid, Md Rafi Ur [1 ]
Mehnaz, Shagufta [1 ]
Affiliations:
[1] Penn State Univ, University Pk, PA 16802 USA
DOI:
10.1109/SP54263.2024.00141
CLC Classification Number: TP [automation technology; computer technology]
Discipline Code: 0812
Abstract:
Federated learning (FL) is revolutionizing how we learn from data. With its growing popularity, it is now being used in many safety-critical domains such as autonomous vehicles and healthcare. Since thousands of participants can contribute to this collaborative setting, it is, however, challenging to ensure the security and reliability of such systems. This highlights the need to design FL systems that are secure and robust against malicious participants' actions while also ensuring high utility, privacy of local data, and efficiency. In this paper, we propose a novel FL framework dubbed FLShield that utilizes benign data from FL participants to validate the local models before taking them into account for generating the global model. This is in stark contrast with existing defenses relying on the server's access to clean datasets, an assumption that is often impractical in real-life scenarios and conflicts with the fundamentals of FL. We conduct extensive experiments to evaluate our FLShield framework in different settings and demonstrate its effectiveness in thwarting various types of poisoning and backdoor attacks, including a defense-aware one. FLShield also preserves the privacy of local data against gradient inversion attacks.
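The abstract's core idea, scoring each submitted local model on benign data held by participants and aggregating only the models that pass validation, can be sketched as follows. This is a minimal illustrative sketch, not FLShield's actual algorithm: the linear classifier, the `accuracy` helper, the averaging over validators, and the fixed `threshold` are all assumptions made for the example.

```python
import numpy as np

def accuracy(w, X, y):
    """Accuracy of a toy linear classifier sign(X @ w) on benign data (X, y)."""
    return float(np.mean(np.sign(X @ w) == y))

def validate_and_aggregate(local_models, validator_data, threshold=0.6):
    """Hypothetical validation-based aggregation: score every local model
    on each validator's benign dataset, drop models whose mean validation
    accuracy falls below the threshold, and average the survivors
    (FedAvg over the accepted models)."""
    scores = [np.mean([accuracy(w, X, y) for X, y in validator_data])
              for w in local_models]
    kept = [w for w, s in zip(local_models, scores) if s >= threshold]
    if not kept:
        # Degenerate case: keep at least the best-scoring model so a
        # round always produces a global update.
        kept = [local_models[int(np.argmax(scores))]]
    return np.mean(kept, axis=0)

# Example: one validator's benign data; a poisoned (sign-flipped) model
# scores 0 on validation and is excluded from the global average.
X = np.array([[1.0, 0.0], [-1.0, 0.0], [2.0, 1.0], [-2.0, 1.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])
benign = [np.array([1.0, 0.0]), np.array([0.9, 0.1])]
poisoned = [np.array([-1.0, 0.0])]
global_model = validate_and_aggregate(benign + poisoned, [(X, y)])
```

In the real framework the validation is done by the participants themselves rather than a server-held clean dataset, which is precisely the distinction the abstract draws.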
Pages: 2572-2590
Page count: 19
Related Papers (50 total)
  • [41] SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks
    Liu, Zizhen
    He, Weiyang
    Chang, Chip-Hong
    Ye, Jing
    Li, Huawei
    Li, Xiaowei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6604 - 6619
  • [42] RobustFL: Robust Federated Learning Against Poisoning Attacks in Industrial IoT Systems
    Zhang, Jiale
    Ge, Chunpeng
    Hu, Feng
    Chen, Bing
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09) : 6388 - 6397
  • [43] PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems
    Zhang, Jiale
    Chen, Bing
    Cheng, Xiang
    Huynh Thi Thanh Binh
    Yu, Shui
    IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (05) : 3310 - 3322
  • [44] A Robust and Efficient Federated Learning Algorithm Against Adaptive Model Poisoning Attacks
    Yang, Han
    Gu, Dongbing
    He, Jianhua
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (09) : 16289 - 16302
  • [45] Resilience of Wireless Ad Hoc Federated Learning against Model Poisoning Attacks
    Tezuka, Naoya
    Ochiai, Hideya
    Sun, Yuwei
    Esaki, Hiroshi
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 168 - 177
  • [46] DefendFL: A Privacy-Preserving Federated Learning Scheme Against Poisoning Attacks
    Liu, Jiao
    Li, Xinghua
    Liu, Ximeng
    Zhang, Haiyan
    Miao, Yinbin
    Deng, Robert H.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [47] BlockFD: Blockchain-Based Federated Distillation Against Poisoning Attacks
    Li Y.
    Zhang J.
    Zhu J.
    Li W.
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (21) : 12901 - 12916
  • [48] A Differentially Private Federated Learning Model Against Poisoning Attacks in Edge Computing
    Zhou, Jun
    Wu, Nan
    Wang, Yisong
    Gu, Shouzhen
    Cao, Zhenfu
    Dong, Xiaolei
    Choo, Kim-Kwang Raymond
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 1941 - 1958
  • [49] Data Poisoning Attacks on Federated Machine Learning
    Sun, Gan
    Cong, Yang
    Dong, Jiahua
    Wang, Qiang
    Lyu, Lingjuan
    Liu, Ji
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (13) : 11365 - 11375
  • [50] Fair Detection of Poisoning Attacks in Federated Learning
    Singh, Ashneet Khandpur
    Blanco-Justicia, Alberto
    Domingo-Ferrer, Josep
    Sanchez, David
    Rebollo-Monedero, David
    2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2020, : 224 - 229