FLTracer: Accurate Poisoning Attack Provenance in Federated Learning

Cited by: 2
Authors
Zhang, Xinyu [1 ]
Liu, Qingyu [1 ]
Ba, Zhongjie [1 ]
Hong, Yuan [2 ]
Zheng, Tianhang [1 ]
Lin, Feng [1 ]
Lu, Li [1 ]
Ren, Kui [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, State Key Lab Blockchain & Data Secur, Sch Cyber Sci & Technol, Hangzhou 310007, Zhejiang, Peoples R China
[2] Univ Connecticut, Sch Comp, Stamford, CT 06901 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Federated learning (FL); poisoning attacks; untargeted attacks; backdoor attacks; attack provenance; attack tracing; anomaly detection;
DOI
10.1109/TIFS.2024.3410014
CLC number
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model. However, recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them. In this paper, we first conduct a comprehensive study of prior FL attacks and detection methods. The results show that all existing detection methods are effective only against limited and specific attacks. Most detection methods also suffer from high false positive rates, which lead to significant performance degradation, especially in non-independent and identically distributed (non-IID) settings. To address these issues, we propose FLTracer, the first FL attack provenance framework that accurately detects various attacks and traces the attack time, objective, type, and poisoned location of updates. Unlike existing methods that rely solely on cross-client anomaly detection, we propose a Kalman filter-based cross-round detection that identifies adversaries by tracking behavior changes before and after an attack. This makes the framework resilient to data heterogeneity and effective even in non-IID settings. To further improve detection accuracy, we employ four novel features and capture their anomalies through joint decisions. Extensive evaluations show that FLTracer achieves an average true positive rate of over 96.88% at an average false positive rate of less than 2.67%, significantly outperforming SOTA detection methods (https://github.com/Eyr3/FLTracer).
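To make the abstract's core idea concrete, the sketch below illustrates what a Kalman filter-based cross-round check could look like. It is not the authors' implementation: it assumes a simple one-dimensional random-walk state model over a single per-client statistic (here, the L2 norm of each round's update), with hypothetical parameters process_var, obs_var, and threshold, and it flags a round whose Kalman innovation is improbably large relative to the predicted variance.

import numpy as np

class CrossRoundDetector:
    """Toy cross-round anomaly detector in the spirit of FLTracer's
    Kalman-filter idea: track one per-client statistic across rounds
    and flag rounds with improbably large innovations. All parameter
    values are illustrative assumptions, not the paper's settings."""

    def __init__(self, process_var=1e-3, obs_var=1e-2, threshold=3.0):
        self.q = process_var   # assumed process (state-transition) noise
        self.r = obs_var       # assumed observation noise
        self.k = threshold     # flag if |innovation| > k * sqrt(S)
        self.x = None          # state estimate (expected statistic)
        self.p = 1.0           # variance of the state estimate

    def update(self, z):
        """Ingest one round's observed statistic z; return True if anomalous."""
        if self.x is None:     # initialize on the first observation
            self.x = z
            return False
        p_pred = self.p + self.q          # predict: random-walk model
        y = z - self.x                    # innovation (residual)
        s = p_pred + self.r               # innovation variance
        anomalous = abs(y) > self.k * np.sqrt(s)
        if not anomalous:                 # skip the update for flagged rounds
            g = p_pred / s                # Kalman gain; keeps a poisoned
            self.x += g * y               # round from contaminating the
            self.p = (1.0 - g) * p_pred   # benign behavior profile
        return anomalous

# Usage: one detector per client, fed the update norm each round.
det = CrossRoundDetector()
norms = list(np.random.normal(10.0, 0.1, size=30))  # benign rounds
norms.append(30.0)                                  # a scaled (poisoned) update
for rnd, z in enumerate(norms):
    if det.update(z):
        print(f"update flagged at round {rnd}")

In the paper's actual framework, per-round signals like this would be combined with the four features via joint decisions; this sketch covers only the single-statistic case.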
Pages: 9534-9549 (16 pages)