DPAD: Data Poisoning Attack Defense Mechanism for federated learning-based system

Cited by: 0
Authors
Basak, Santanu [1 ]
Chatterjee, Kakali [1 ]
Affiliations
[1] Natl Inst Technol Patna, Dept Comp Sci & Engn, Patna 800005, Bihar, India
Keywords
Data Poisoning Attack; Data Poisoning Attack Defense; Federated learning; Machine learning; Machine learning attack; Secure aggregation process;
DOI
10.1016/j.compeleceng.2024.109893
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Federated Learning (FL)-based approaches are being adopted rapidly in areas such as home automation, smart healthcare, and smart cars. In FL, multiple users collaborate in a distributed manner to construct a global model without sharing raw data. FL-based systems resolve several issues of central-server-based machine learning, such as data availability and user privacy, yet they remain vulnerable to data poisoning attacks and re-identification attacks. This paper proposes a Data Poisoning Attack Defense (DPAD) mechanism that efficiently detects and defends against data poisoning attacks and secures the aggregation process in FL-based systems. DPAD verifies each client's update with an audit mechanism that decides whether the local update is included in aggregation. Experimental results demonstrate the effectiveness of the attack and the strength of the DPAD mechanism compared with state-of-the-art methods.
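The abstract describes an audit step that screens each client's update before aggregation. The sketch below illustrates one plausible form of such audited aggregation, assuming the server holds a small clean audit set and rejects updates that degrade audit loss beyond a tolerance; the function names, the logistic model, and the threshold AUDIT_THRESHOLD are illustrative assumptions, not the paper's exact DPAD procedure.

```python
# Minimal sketch of audited aggregation for FL, assuming a server-side audit set.
# Not the paper's exact DPAD algorithm; audit criterion and names are hypothetical.
import numpy as np

AUDIT_THRESHOLD = 0.05  # assumed tolerance on audit-loss degradation


def audit_loss(weights, X_audit, y_audit):
    """Logistic loss of a linear model on the server's held-out audit set."""
    probs = 1.0 / (1.0 + np.exp(-(X_audit @ weights)))
    eps = 1e-12
    return -np.mean(y_audit * np.log(probs + eps) + (1 - y_audit) * np.log(1 - probs + eps))


def audited_aggregate(global_w, client_updates, X_audit, y_audit):
    """Average only those client updates that pass the audit check (FedAvg-style)."""
    base = audit_loss(global_w, X_audit, y_audit)
    accepted = []
    for delta in client_updates:
        candidate = global_w + delta
        # Reject updates that worsen the audit loss beyond the tolerance.
        if audit_loss(candidate, X_audit, y_audit) - base <= AUDIT_THRESHOLD:
            accepted.append(delta)
    if not accepted:  # no trustworthy update this round; keep the global model
        return global_w
    return global_w + np.mean(accepted, axis=0)


# Toy usage: one honest update and one poisoned (sign-flipped) update.
rng = np.random.default_rng(0)
X_audit = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y_audit = (X_audit @ true_w > 0).astype(float)
global_w = np.zeros(5)
honest_update = 0.1 * true_w
poisoned_update = -0.5 * true_w
print(audited_aggregate(global_w, [honest_update, poisoned_update], X_audit, y_audit))
```

In this toy run the poisoned update raises the audit loss and is excluded, so only the honest update contributes to the new global model.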
Pages: 15