FLTracer: Accurate Poisoning Attack Provenance in Federated Learning

Cited by: 2
Authors
Zhang, Xinyu [1 ]
Liu, Qingyu [1 ]
Ba, Zhongjie [1 ]
Hong, Yuan [2 ]
Zheng, Tianhang [1 ]
Lin, Feng [1 ]
Lu, Li [1 ]
Ren, Kui [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, State Key Lab Blockchain & Data Secur, Sch Cyber Sci & Technol, Hangzhou 310007, Zhejiang, Peoples R China
[2] Univ Connecticut, Sch Comp, Stamford, CT 06901 USA
Funding
US National Science Foundation; National Natural Science Foundation of China;
Keywords
Federated learning (FL); poisoning attacks; untargeted attacks; backdoor attacks; attack provenance; attack tracing; anomaly detection;
DOI
10.1109/TIFS.2024.3410014
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
Federated Learning (FL) is a promising distributed learning approach that enables multiple clients to collaboratively train a shared global model. However, recent studies show that FL is vulnerable to various poisoning attacks, which can degrade the performance of global models or introduce backdoors into them. In this paper, we first conduct a comprehensive study on prior FL attacks and detection methods. The results show that all existing detection methods are effective only against limited and specific attacks. Most detection methods suffer from high false positives, which lead to significant performance degradation, especially in non-independent and identically distributed (non-IID) settings. To address these issues, we propose FLTracer, the first FL attack provenance framework to accurately detect various attacks and trace the attack time, objective, type, and poisoned location of updates. Unlike existing methodologies that rely solely on cross-client anomaly detection, we propose a Kalman filter-based cross-round detection that identifies adversaries by detecting changes in their behavior before and after the attack. This makes our approach resilient to data heterogeneity and effective even in non-IID settings. To further improve detection accuracy, we employ four novel features and capture their anomalies with joint decisions. Extensive evaluations show that FLTracer achieves an average true positive rate of over 96.88% at an average false positive rate of less than 2.67%, significantly outperforming SOTA detection methods (https://github.com/Eyr3/FLTracer).
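The cross-round idea in the abstract — tracking each client's update behavior over rounds with a Kalman filter and flagging sudden deviations — can be illustrated with a minimal sketch. This is not FLTracer's actual implementation (which uses four features and joint decisions); it is a hypothetical 1-D example that filters a single per-client statistic (e.g., the update norm) across rounds and flags a round whose prediction residual is unusually large. The function name, parameters `q`, `r`, and `thresh`, and the choice of statistic are all assumptions for illustration.

```python
import numpy as np

def kalman_cross_round_flags(series, q=1e-3, r=1e-1, thresh=3.0):
    """Flag rounds where a client's update statistic deviates from its
    own history, using a 1-D constant-state Kalman filter.

    series : per-round values of one statistic for one client
    q      : process noise variance (how fast benign behavior may drift)
    r      : measurement noise variance
    thresh : flag when |residual| exceeds thresh * predicted std
    """
    x, p = float(series[0]), 1.0   # state estimate and its variance
    flags = []
    for z in series:
        p = p + q                  # predict: state constant, variance grows
        innov = z - x              # innovation (prediction residual)
        s = p + r                  # innovation variance
        flags.append(abs(innov) > thresh * np.sqrt(s))
        k = p / s                  # Kalman gain
        x = x + k * innov          # update state toward the measurement
        p = (1.0 - k) * p          # shrink variance after the update
    return flags

# A client behaves consistently for 10 rounds, then its update norm jumps:
flags = kalman_cross_round_flags([1.0] * 10 + [5.0])
```

Because the filter models each client against its own past rather than against other clients, a benign client with unusual (non-IID) data is not penalized for merely differing from its peers; only a change relative to its own history raises a flag.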
Pages: 9534-9549
Page count: 16
Related Papers
50 records total
  • [21] Logits Poisoning Attack in Federated Distillation
    Tang, Yuhan
    Wu, Zhiyuan
    Gao, Bo
    Wen, Tian
    Wang, Yuwei
    Sun, Sheng
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024, 2024, 14886 : 286 - 298
  • [22] A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function
    Gupta, Prajjwal
    Yadav, Krishna
    Gupta, Brij B.
    Alazab, Mamoun
    Gadekallu, Thippa Reddy
    COMPUTERS & SECURITY, 2023, 130
  • [23] Securing federated learning: a defense strategy against targeted data poisoning attack
    Khraisat, Ansam
    Alazab, Ammar
    Alazab, Moutaz
    Jan, Tony
    Singh, Sarabjot
    Uddin, Md. Ashraf
    DISCOVER INTERNET OF THINGS, 5 (1):
  • [24] A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
    Zhou, Wei
    Zhang, Donglai
    Wang, Hongjie
    Li, Jinliang
    Jiang, Mingjian
    IEEE ACCESS, 2025, 13 : 28628 - 28644
  • [25] FedRecAttack: Model Poisoning Attack to Federated Recommendation
    Rong, Dazhong
    Ye, Shuai
    Zhao, Ruoyan
    Yuen, Hon Ning
    Chen, Jianhai
    He, Qinming
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 2643 - 2655
  • [26] Cross the Chasm: Scalable Privacy-Preserving Federated Learning against Poisoning Attack
    Li, Yiran
    Hu, Guiqiang
    Liu, Xiaoyuan
    Ying, Zuobin
    2021 18TH INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST (PST), 2021,
  • [27] An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems
    Wang, Shuyi
    Zuccon, Guido
    PROCEEDINGS OF THE 2023 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2023, 2023, : 215 - 224
  • [28] FedTop: a constraint-loosed federated learning aggregation method against poisoning attack
    Wang, Che
    Wu, Zhenhao
    Gao, Jianbo
    Zhang, Jiashuo
    Xia, Junjie
    Gao, Feng
    Guan, Zhi
    Chen, Zhong
    FRONTIERS OF COMPUTER SCIENCE, 2024, 18 (05)
  • [29] VagueGAN: A GAN-Based Data Poisoning Attack Against Federated Learning Systems
    Sun, Wei
    Gao, Bo
    Xiong, Ke
    Lu, Yang
    Wang, Yuwei
    2023 20TH ANNUAL IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION, AND NETWORKING, SECON, 2023,
  • [30] Personalized federated learning-based intrusion detection system: Poisoning attack and defense
    Thein, Thin Tharaphe
    Shiraishi, Yoshiaki
    Morii, Masakatu
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 153 : 182 - 192