Collusive Model Poisoning Attack in Decentralized Federated Learning

Cited by: 1
Authors
Tan, Shouhong [1 ]
Hao, Fengrui [1 ]
Gu, Tianlong [1 ]
Li, Long [2 ]
Liu, Ming [3 ]
Affiliations
[1] Jinan Univ, Engn Res Ctr Trustworthy AI, Minist Educ, Guangzhou 510632, Peoples R China
[2] Guilin Univ Elect Technol, Guangxi Key Lab Trusted Software, Guilin 541004, Peoples R China
[3] Guangzhou Res Inst Informat Technol, Guangzhou 510075, Peoples R China
Keywords
Collusive attack; decentralized federated learning (DFL); industrial Internet of Things (IIoT); model poisoning; OPPORTUNITIES; CHALLENGES;
DOI
10.1109/TII.2023.3342901
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
As a privacy-preserving machine learning paradigm, federated learning (FL) has attracted widespread attention from both academia and industry. Decentralized FL (DFL) overcomes the problems of an untrusted aggregation server, a single point of failure, and poor scalability in traditional FL, making it suitable for the industrial Internet of Things (IIoT). However, DFL also gives malicious participants more convenient conditions for launching attacks. This article is the first to focus on model poisoning attacks in DFL, and proposes a novel attack method called the collusive model poisoning attack (CMPA). To implement CMPA, we propose a dynamic adaptive construction mechanism in which malicious participants dynamically and adaptively construct malicious local models that satisfy distance constraints, reducing the convergence speed and accuracy of consensus models. Furthermore, we design collusion-based attack enhancement strategies, in which multiple participants collude while constructing malicious local models to increase the attack's strength. Empirical experiments on the MNIST and CIFAR-10 datasets reveal that CMPA significantly degrades both the training process and the results of DFL. Attack tests against representative defense methods show that CMPA not only invalidates statistics-based defenses but also skillfully overcomes performance-based methods, further proving its effectiveness and stealthiness. In addition, experiments in a practical IIoT scenario show that CMPA can effectively disrupt system functionality.
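The core idea summarized above can be sketched in code: a malicious participant crafts a local model that opposes honest convergence while staying within a distance bound of its honest model, so that distance-based statistical filtering does not flag it. This is a hypothetical illustration only; the paper's actual construction mechanism, constraint form, and collusion strategies are not given in this record, and the function names and the threshold `eps` are assumptions for the sketch.

```python
# Hypothetical sketch of a distance-constrained malicious update, in the
# spirit of CMPA's "dynamic adaptive construction" (not the paper's exact
# method). The attacker reverses the benign update direction, then projects
# the perturbation onto an eps-ball so the crafted model stays close to the
# honest local model and evades naive L2-distance-based filtering.
import numpy as np

def craft_malicious_model(honest_model, honest_update, eps):
    """Return a model that opposes the benign update direction but lies
    within L2 distance `eps` of the honest local model."""
    direction = -honest_update                 # push against convergence
    norm = np.linalg.norm(direction)
    if norm > eps:                             # project onto the eps-ball
        direction = direction * (eps / norm)
    return honest_model + direction

rng = np.random.default_rng(0)
honest = rng.normal(size=10)           # honest local model parameters
update = 0.1 * rng.normal(size=10)     # honest local update direction
malicious = craft_malicious_model(honest, update, eps=0.05)

# The crafted model respects the distance constraint...
assert np.linalg.norm(malicious - honest) <= 0.05 + 1e-9
# ...yet its deviation points against the honest update direction.
assert float(np.dot(malicious - honest, update)) < 0
```

Collusion (per the abstract) would coordinate several such participants so their combined deviations reinforce each other while each individually satisfies the constraint.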
Pages: 5989 - 5999
Number of pages: 11
Related Papers
50 records
  • [21] Challenges and Countermeasures of Federated Learning Data Poisoning Attack Situation Prediction
    Wu, Jianping
    Jin, Jiahe
    Wu, Chunming
    MATHEMATICS, 2024, 12 (06)
  • [22] Data Poisoning Attack Based on Privacy Reasoning and Countermeasure in Federated Learning
    Lv, Jiguang
    Xu, Shuchun
    Ling, Yi
    Man, Dapeng
    Han, Shuai
    Yang, Wu
    2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023, : 472 - 479
  • [23] Mitigation of a poisoning attack in federated learning by using historical distance detection
    Shi, Zhaosen
    Ding, Xuyang
    Li, Fagen
    Chen, Yingni
    Li, Canran
    ANNALS OF TELECOMMUNICATIONS, 2023, 78 (3-4) : 135 - 147
  • [24] Poisoning-Assisted Property Inference Attack Against Federated Learning
    Wang, Zhibo
    Huang, Yuting
    Song, Mengkai
    Wu, Libing
    Xue, Feng
    Ren, Kui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (04) : 3328 - 3340
  • [25] Mitigation of a poisoning attack in federated learning by using historical distance detection
    Shi, Zhaosen
    Ding, Xuyang
    Li, Fagen
    Chen, Yingni
    Li, Canran
    ANNALS OF TELECOMMUNICATIONS, 2023, 78 (3-4) : 135 - 147
  • [26] Efficiently Achieving Privacy Preservation and Poisoning Attack Resistance in Federated Learning
    Li, Xueyang
    Yang, Xue
    Zhou, Zhengchun
    Lu, Rongxing
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 4358 - 4373
  • [27] Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
    Mallah, Ranwa Al
    Lopez, David
    Badu-Marfo, Godwin
    Farooq, Bilal
    IEEE ACCESS, 2023, 11 : 125064 - 125079
  • [28] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [29] Pocket Diagnosis: Secure Federated Learning Against Poisoning Attack in the Cloud
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Liu, Ximeng
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2022, 15 (06) : 3429 - 3442
  • [30] Logits Poisoning Attack in Federated Distillation
    Tang, Yuhan
    Wu, Zhiyuan
    Gao, Bo
    Wen, Tian
    Wang, Yuwei
    Sun, Sheng
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024, 2024, 14886 : 286 - 298