FLTracer: Accurate Poisoning Attack Provenance in Federated Learning

Cited by: 2
Authors
Zhang, Xinyu [1 ]
Liu, Qingyu [1 ]
Ba, Zhongjie [1 ]
Hong, Yuan [2 ]
Zheng, Tianhang [1 ]
Lin, Feng [1 ]
Lu, Li [1 ]
Ren, Kui [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, State Key Lab Blockchain & Data Secur, Sch Cyber Sci & Technol, Hangzhou 310007, Zhejiang, Peoples R China
[2] Univ Connecticut, Sch Comp, Stamford, CT 06901 USA
Funding
U.S. National Science Foundation; National Natural Science Foundation of China;
Keywords
Federated learning (FL); poisoning attacks; untargeted attacks; backdoor attacks; attack provenance; attack tracing; anomaly detection;
DOI
10.1109/TIFS.2024.3410014
CLC number
TP301 [Theory and Methods];
Discipline code
081202;
Abstract
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model. However, recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them. In this paper, we first conduct a comprehensive study of prior FL attacks and detection methods. The results show that all existing detection methods are effective only against a limited set of specific attacks. Most detection methods also suffer from high false positive rates, which lead to significant performance degradation, especially in non-independent and identically distributed (non-IID) settings. To address these issues, we propose FLTracer, the first FL attack provenance framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates. Unlike existing methods, which rely solely on cross-client anomaly detection, we propose a Kalman filter-based cross-round detection that identifies adversaries by tracking behavioral changes before and after an attack. This makes our approach resilient to data heterogeneity and effective even in non-IID settings. To further improve detection accuracy, we employ four novel features and capture their anomalies through joint decisions. Extensive evaluations show that FLTracer achieves an average true positive rate of over 96.88% at an average false positive rate of less than 2.67%, significantly outperforming SOTA detection methods (https://github.com/Eyr3/FLTracer).
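The abstract's central idea, tracking each client's behavior across successive training rounds with a Kalman filter and flagging a client when its behavior suddenly changes, can be sketched in a few lines. This is not the paper's implementation: the scalar random-walk state model, the noise variances `q` and `r`, the tracked statistic (e.g. a client's update norm), and the 3-sigma innovation threshold are all illustrative assumptions.

```python
import math

def kalman_flag_anomalies(series, q=1e-3, r=0.05, threshold=3.0):
    """Track one client's per-round scalar statistic (e.g. update norm)
    with a 1-D Kalman filter under a random-walk model, and flag rounds
    whose innovation (observed minus predicted value) exceeds `threshold`
    standard deviations.  q: process noise, r: measurement noise."""
    x, p = series[0], 1.0        # initial state estimate and covariance
    flags = [False]              # the first round has no prediction to test
    for z in series[1:]:
        p = p + q                # predict: state persists, uncertainty grows
        innovation = z - x
        s = p + r                # innovation variance
        flags.append(abs(innovation) / math.sqrt(s) > threshold)
        k = p / s                # Kalman gain
        x = x + k * innovation   # update state estimate
        p = (1.0 - k) * p        # update covariance
    return flags

# A client whose update norm hovers near 1.0, then jumps in the last round,
# as a poisoning attack might cause; only the final round is flagged.
history = [1.0, 1.02, 0.98, 1.01, 0.99, 1.0, 1.03, 0.97, 5.0]
print(kalman_flag_anomalies(history))
```

Because the filter models each client against its own history rather than against other clients, a stable but unusual client (as arises under non-IID data) is not flagged; only a change in that client's own behavior is.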
Pages: 9534-9549
Page count: 16
Related papers
50 records in total
  • [1] Mitigating Poisoning Attack in Federated Learning
    Uprety, Aashma
    Rawat, Danda B.
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [2] Deep Model Poisoning Attack on Federated Learning
    Zhou, Xingchen
    Xu, Ming
    Wu, Yiming
    Zheng, Ning
    FUTURE INTERNET, 2021, 13 (03)
  • [3] Understanding Distributed Poisoning Attack in Federated Learning
    Cao, Di
    Chang, Shan
    Lin, Zhijian
    Liu, Guohua
    Sun, Donghong
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 233 - 239
  • [4] Collusive Model Poisoning Attack in Decentralized Federated Learning
    Tan, Shouhong
    Hao, Fengrui
    Gu, Tianlong
    Li, Long
    Liu, Ming
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (04) : 5989 - 5999
  • [5] Mitigate Data Poisoning Attack by Partially Federated Learning
    Dam, Khanh Huu The
    Legay, Axel
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [6] Poisoning Attack in Federated Learning using Generative Adversarial Nets
    Zhang, Jiale
    Chen, Junjun
    Wu, Di
    Chen, Bing
    Yu, Shui
    2019 18TH IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS/13TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA SCIENCE AND ENGINEERING (TRUSTCOM/BIGDATASE 2019), 2019, : 374 - 380
  • [7] ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning
    Guo, Jingjing
    Li, Haiyang
    Huang, Feiran
    Liu, Zhiquan
    Peng, Yanguo
    Li, Xinghua
    Ma, Jianfeng
    Menon, Varun G.
    Igorevich, Konstantin Kostromitin
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10) : 6526 - 6536
  • [8] FLAIR: Defense against Model Poisoning Attack in Federated Learning
    Sharma, Atul
    Chen, Wei
    Zhao, Joshua
    Qiu, Qiang
    Bagchi, Saurabh
    Chaterji, Somali
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 553 - +
  • [9] LoMar: A Local Defense Against Poisoning Attack on Federated Learning
    Li, Xingyu
    Qu, Zhe
    Zhao, Shangqing
    Tang, Bo
    Lu, Zhuo
    Liu, Yao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 437 - 450
  • [10] FEDGUARD: Selective Parameter Aggregation for Poisoning Attack Mitigation in Federated Learning
    Chelli, Melvin
    Prigent, Cedric
    Schubotz, Rene
    Costan, Alexandru
    Antoniu, Gabriel
    Cudennec, Loic
    Slusallek, Philipp
    2023 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING, CLUSTER, 2023, : 72 - 81