Collusive Model Poisoning Attack in Decentralized Federated Learning

Cited by: 1
Authors
Tan, Shouhong [1 ]
Hao, Fengrui [1 ]
Gu, Tianlong [1 ]
Li, Long [2 ]
Liu, Ming [3 ]
Affiliations
[1] Jinan Univ, Engn Res Ctr Trustworthy AI, Minist Educ, Guangzhou 510632, Peoples R China
[2] Guilin Univ Elect Technol, Guangxi Key Lab Trusted Software, Guilin 541004, Peoples R China
[3] Guangzhou Res Inst Informat Technol, Guangzhou 510075, Peoples R China
Keywords
Collusive attack; decentralized federated learning (DFL); industrial Internet of Things (IIoT); model poisoning; opportunities; challenges
DOI
10.1109/TII.2023.3342901
CLC number (Chinese Library Classification)
TP [automation technology, computer technology]
Discipline classification code
0812
Abstract
As a privacy-preserving machine learning paradigm, federated learning (FL) has attracted widespread attention from both academia and industry. Decentralized FL (DFL) overcomes traditional FL's problems of an untrusted aggregation server, a single point of failure, and poor scalability, making it well suited to the industrial Internet of Things (IIoT). However, DFL also gives malicious participants more convenient conditions for launching attacks. This article is the first to focus on model poisoning attacks in DFL, and it proposes a novel attack method called the collusive model poisoning attack (CMPA). To implement CMPA, we propose a dynamic adaptive construction mechanism, in which malicious participants dynamically and adaptively construct malicious local models that satisfy distance constraints, reducing the convergence speed and accuracy of consensus models. Furthermore, we design collusion-based attack enhancement strategies, in which multiple participants collude while constructing malicious local models to increase the attack's strength. Empirical experiments on the MNIST and CIFAR-10 datasets show that CMPA significantly degrades both the training process and the results of DFL. Attack tests against representative defense methods show that CMPA not only invalidates statistical-based defenses but also skillfully overcomes performance-based methods, further demonstrating its effectiveness and stealthiness. In addition, experiments in a practical IIoT scenario show that CMPA can effectively disrupt system functionality.
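The two mechanisms the abstract describes, distance-constrained construction of malicious local models and collusion among several attackers, can be illustrated with a short Python sketch. This is only a generic illustration under stated assumptions, not the authors' CMPA: the function names, the negative-update heuristic, and the use of an L2 ball as the distance constraint are all choices made here for demonstration.

```python
# Illustrative sketch only -- NOT the paper's CMPA algorithm.
# Assumptions: models are flat numpy vectors, the distance constraint is an
# L2 ball around a reference (consensus) model, and the adversarial
# direction is simply the negative of a benign local update.
import numpy as np

def craft_poisoned_update(benign_update, reference_model, max_dist, scale=10.0):
    """Step opposite to the benign update, then project back into an L2
    ball of radius max_dist around the reference model, so that
    distance-based (statistical) filters still accept the result."""
    poisoned = reference_model - scale * benign_update   # adversarial step
    offset = poisoned - reference_model
    norm = np.linalg.norm(offset)
    if norm > max_dist:                                  # enforce constraint
        poisoned = reference_model + offset * (max_dist / norm)
    return poisoned

def colluding_updates(benign_update, reference_model, max_dist,
                      n_attackers=3, seed=0):
    """Colluders share one adversarial direction but add tiny independent
    perturbations (1% of the bound) so their submissions are not identical."""
    rng = np.random.default_rng(seed)
    base = craft_poisoned_update(benign_update, reference_model, max_dist)
    return [base + 0.01 * max_dist * rng.standard_normal(base.shape)
            for _ in range(n_attackers)]

# Toy usage: 10-parameter model, honest update of 0.1 per weight.
w_ref = np.zeros(10)
w_benign = 0.1 * np.ones(10)
attacks = colluding_updates(w_benign, w_ref, max_dist=1.0)
```

The projection step is what makes such an attack hard for distance-based defenses to flag: every poisoned model sits no farther from the consensus model than the constraint allows, while still pulling aggregation in the adversarial direction.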
Pages: 5989 - 5999
Page count: 11
Related Papers
50 records in total
  • [11] Defending against model poisoning attack in federated learning: A variance-minimization approach
    Xu, Hairuo
    Shu, Tao
    JOURNAL OF INFORMATION SECURITY AND APPLICATIONS, 2024, 82
  • [12] Poisoning Attack in Federated Learning using Generative Adversarial Nets
    Zhang, Jiale
    Chen, Junjun
    Wu, Di
    Chen, Bing
    Yu, Shui
    2019 18TH IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS/13TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA SCIENCE AND ENGINEERING (TRUSTCOM/BIGDATASE 2019), 2019, : 374 - 380
  • [13] ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning
    Guo, Jingjing
    Li, Haiyang
    Huang, Feiran
    Liu, Zhiquan
    Peng, Yanguo
    Li, Xinghua
    Ma, Jianfeng
    Menon, Varun G.
    Igorevich, Konstantin Kostromitin
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (10) : 6526 - 6536
  • [14] LoMar: A Local Defense Against Poisoning Attack on Federated Learning
    Li, Xingyu
    Qu, Zhe
    Zhao, Shangqing
    Tang, Bo
    Lu, Zhuo
    Liu, Yao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (01) : 437 - 450
  • [15] APDPFL: Anti-Poisoning Attack Decentralized Privacy Enhanced Federated Learning Scheme for Flight Operation Data Sharing
    Li, Xinyan
    Zhao, Huimin
    Xu, Junjie
    Zhu, Guangtian
    Deng, Wu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (12) : 19098 - 19109
  • [16] Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
    Sun, Yuwei
    Ochiai, Hideya
    Sakuma, Jun
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [17] Attacking-Distance-Aware Attack: Semi-targeted Model Poisoning on Federated Learning
    Sun, Yuwei
    Ochiai, Hideya
    Sakuma, Jun
    IEEE TRANSACTIONS ON ARTIFICIAL INTELLIGENCE, 2024, 5 (02): 925 - 939
  • [18] FedIMP: Parameter Importance-based Model Poisoning attack against Federated learning system
    Li, Xuan
    Wang, Naiyu
    Yuan, Shuai
    Guan, Zhitao
    COMPUTERS & SECURITY, 2024, 144
  • [19] HidAttack: An Effective and Undetectable Model Poisoning Attack to Federated Recommenders
    Ali, Waqar
    Umer, Khalid
    Zhou, Xiangmin
    Shao, Jie
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2025, 37 (03) : 1227 - 1240
  • [20] FEDGUARD: Selective Parameter Aggregation for Poisoning Attack Mitigation in Federated Learning
    Chelli, Melvin
    Prigent, Cedric
    Schubotz, Rene
    Costan, Alexandru
    Antoniu, Gabriel
    Cudennec, Loic
    Slusallek, Philipp
    2023 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING, CLUSTER, 2023, : 72 - 81