FLTracer: Accurate Poisoning Attack Provenance in Federated Learning

Cited by: 2
Authors
Zhang, Xinyu [1 ]
Liu, Qingyu [1 ]
Ba, Zhongjie [1 ]
Hong, Yuan [2 ]
Zheng, Tianhang [1 ]
Lin, Feng [1 ]
Lu, Li [1 ]
Ren, Kui [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, State Key Lab Blockchain & Data Secur, Sch Cyber Sci & Technol, Hangzhou 310007, Zhejiang, Peoples R China
[2] Univ Connecticut, Sch Comp, Stamford, CT 06901 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Federated learning (FL); poisoning attacks; untargeted attacks; backdoor attacks; attack provenance; attack tracing; anomaly detection;
DOI
10.1109/TIFS.2024.3410014
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model. However, recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them. In this paper, we first conduct a comprehensive study of prior FL attacks and detection methods. The results show that all existing detection methods are effective only against limited, specific attacks. Most detection methods suffer from high false positive rates, leading to significant performance degradation, especially in non-independent and identically distributed (non-IID) settings. To address these issues, we propose FLTracer, the first FL attack provenance framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates. Unlike existing methods that rely solely on cross-client anomaly detection, we propose a Kalman filter-based cross-round detection that identifies adversaries by capturing behavior changes before and after an attack. This makes FLTracer resilient to data heterogeneity and effective even in non-IID settings. To further improve detection accuracy, we employ four novel features and capture their anomalies through joint decisions. Extensive evaluations show that FLTracer achieves an average true positive rate of over 96.88% at an average false positive rate of less than 2.67%, significantly outperforming SOTA detection methods (https://github.com/Eyr3/FLTracer).
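The cross-round idea described in the abstract, modeling each client's own behavior across training rounds and flagging deviations from its predicted trajectory, can be illustrated with a minimal sketch. This is not FLTracer's actual implementation: the scalar random-walk Kalman filter, the choice of the update norm as the tracked statistic, the noise parameters, and the names `kalman_step` and `detect_cross_round` are all illustrative assumptions.

```python
import math

def kalman_step(x, P, z, q=1e-3, r=1e-2):
    """One predict/update step of a scalar Kalman filter.
    x: state estimate, P: estimate variance, z: new measurement,
    q: process-noise variance, r: measurement-noise variance."""
    # Predict (random-walk model: the statistic is expected to stay constant)
    x_pred, P_pred = x, P + q
    # Innovation (residual) and its variance
    residual = z - x_pred
    S = P_pred + r
    # Update
    K = P_pred / S                    # Kalman gain
    x_new = x_pred + K * residual
    P_new = (1.0 - K) * P_pred
    return x_new, P_new, residual, S

def detect_cross_round(series, threshold=3.0):
    """Flag rounds where a client's per-round statistic (e.g., update norm)
    deviates from its own Kalman prediction by more than `threshold` sigma."""
    x, P = series[0], 1.0
    flags = []
    for z in series[1:]:
        x, P, residual, S = kalman_step(x, P, z)
        flags.append(abs(residual) / math.sqrt(S) > threshold)
    return flags

# Benign client: stable update norms; attacker: sudden jump when poisoning starts.
benign = [1.0, 1.02, 0.98, 1.01, 0.99, 1.03, 1.0, 0.97]
attacker = [1.0, 1.02, 0.98, 1.01, 0.99, 5.0, 4.8, 5.1]
print(detect_cross_round(benign))    # no rounds flagged
print(detect_cross_round(attacker))  # flagged from the jump onward
```

Because each client is compared against its own history rather than against other clients, heterogeneous (non-IID) but honest clients are not penalized for merely being different from their peers, which is the intuition behind the cross-round design.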
Pages: 9534-9549 (16 pages)