Collusive Model Poisoning Attack in Decentralized Federated Learning

Cited by: 1
Authors
Tan, Shouhong [1 ]
Hao, Fengrui [1 ]
Gu, Tianlong [1 ]
Li, Long [2 ]
Liu, Ming [3 ]
Affiliations
[1] Jinan Univ, Engn Res Ctr Trustworthy AI, Minist Educ, Guangzhou 510632, Peoples R China
[2] Guilin Univ Elect Technol, Guangxi Key Lab Trusted Software, Guilin 541004, Peoples R China
[3] Guangzhou Res Inst Informat Technol, Guangzhou 510075, Peoples R China
Keywords
Collusive attack; decentralized federated learning (DFL); industrial Internet of Things (IIoT); model poisoning; opportunities; challenges
DOI
10.1109/TII.2023.3342901
Chinese Library Classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
As a privacy-preserving machine learning paradigm, federated learning (FL) has attracted widespread attention from both academia and industry. Decentralized FL (DFL) overcomes the problems of an untrusted aggregation server, a single point of failure, and poor scalability in traditional FL, making it suitable for the industrial Internet of Things (IIoT). However, DFL also gives malicious participants more convenient conditions for launching attacks. This article focuses on the model poisoning attack in DFL for the first time and proposes a novel attack method called the collusive model poisoning attack (CMPA). To implement CMPA, we propose a dynamic adaptive construction mechanism, in which malicious participants can dynamically and adaptively construct malicious local models that satisfy distance constraints, reducing the convergence speed and accuracy of consensus models. Furthermore, we design collusion-based attack enhancement strategies, in which multiple participants collude while constructing malicious local models to increase attack strength. Empirical experiments conducted on the MNIST and CIFAR-10 datasets reveal that CMPA significantly impairs the training process and results of DFL. Attack tests against representative defense methods show that CMPA not only invalidates statistics-based defenses but also skillfully overcomes performance-based methods, further proving its effectiveness and stealthiness. In addition, experiments based on a practical IIoT scenario show that CMPA can effectively disrupt system functionality.
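The abstract's core idea, crafting a poisoned local model that stays within a distance constraint of benign updates so that statistics-based filters do not flag it, can be illustrated with a minimal, hypothetical sketch. This is not the paper's actual CMPA construction; the function name, the toy data, and the fixed threshold `eps` are all illustrative assumptions:

```python
import numpy as np

def craft_malicious_update(benign_updates, eps):
    """Hypothetical distance-constrained poisoning sketch (not the
    paper's CMPA algorithm): push opposite the benign consensus
    direction, but keep the crafted update exactly `eps` away from
    the benign mean so norm-based statistical filters accept it."""
    mean = np.mean(benign_updates, axis=0)              # benign consensus update
    direction = -mean / (np.linalg.norm(mean) + 1e-12)  # unit vector opposing consensus
    return mean + eps * direction                        # lies exactly eps from the mean

# toy example: three benign participants with 4-dimensional updates
benign = np.array([[1.0, 0.5, -0.2, 0.3],
                   [0.9, 0.6, -0.1, 0.2],
                   [1.1, 0.4, -0.3, 0.4]])
mal = craft_malicious_update(benign, eps=0.5)
```

The design point is the trade-off the abstract describes: a larger deviation slows convergence more, but a deviation bounded by `eps` keeps the poisoned model inside the acceptance region of distance-based defenses; colluding attackers can then coordinate several such bounded deviations to compound the effect.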
Pages: 5989-5999
Page count: 11