Collusive Model Poisoning Attack in Decentralized Federated Learning

Cited by: 1
Authors
Tan, Shouhong [1 ]
Hao, Fengrui [1 ]
Gu, Tianlong [1 ]
Li, Long [2 ]
Liu, Ming [3 ]
Affiliations
[1] Jinan Univ, Engn Res Ctr Trustworthy AI, Minist Educ, Guangzhou 510632, Peoples R China
[2] Guilin Univ Elect Technol, Guangxi Key Lab Trusted Software, Guilin 541004, Peoples R China
[3] Guangzhou Res Inst Informat Technol, Guangzhou 510075, Peoples R China
Keywords
Collusive attack; decentralized federated learning (DFL); industrial Internet of Things (IIoT); model poisoning; opportunities; challenges
DOI
10.1109/TII.2023.3342901
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
As a privacy-preserving machine learning paradigm, federated learning (FL) has attracted widespread attention from both academia and industry. Decentralized FL (DFL) overcomes the problems of an untrusted aggregation server, a single point of failure, and poor scalability in traditional FL, making it suitable for the industrial Internet of Things (IIoT). However, DFL also provides more convenient conditions for malicious participants to launch attacks. This article is the first to focus on model poisoning attacks in DFL and proposes a novel attack method called the collusive model poisoning attack (CMPA). To implement CMPA, we propose a dynamic adaptive construction mechanism, in which malicious participants can dynamically and adaptively construct malicious local models that satisfy distance constraints, reducing the convergence speed and accuracy of consensus models. Furthermore, we design collusion-based attack enhancement strategies, where multiple participants collude while constructing malicious local models to increase the strength of the attack. Empirical experiments conducted on the MNIST and CIFAR-10 datasets reveal that CMPA significantly impacts the training process and results of DFL. Attack tests against representative defense methods show that CMPA not only invalidates statistics-based defenses but also skillfully overcomes performance-based methods, further proving its effectiveness and stealthiness. In addition, experiments based on a practical IIoT scenario show that CMPA can effectively disrupt system functionality.
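The abstract describes CMPA only at a high level, so the sketch below is a toy illustration of one plausible reading of the "distance-constrained malicious local model" step, not the authors' actual construction. The reference model, the L2 ball of radius epsilon, the boost factor, and the function name craft_malicious_update are all assumptions introduced here for illustration.

```python
# Minimal sketch (assumptions, not the paper's method): push a local model
# against the honest training direction to slow convergence, then project it
# back inside an L2 ball around a reference model so that distance-based
# (statistical) defenses are less likely to flag it.
import numpy as np

def craft_malicious_update(reference: np.ndarray,
                           benign_update: np.ndarray,
                           epsilon: float,
                           boost: float = 10.0) -> np.ndarray:
    """Return a poisoned model vector within L2 distance `epsilon` of `reference`.

    reference     : parameters a defense compares against (e.g., last consensus model)
    benign_update : the honest local update the attacker would otherwise send
    epsilon       : assumed distance tolerance of the defense
    boost         : how aggressively to move against the benign direction
    """
    # Move opposite to the honest update direction.
    malicious = reference - boost * (benign_update - reference)

    # Project back into the epsilon-ball around the reference if needed.
    offset = malicious - reference
    norm = np.linalg.norm(offset)
    if norm > epsilon:
        malicious = reference + offset * (epsilon / norm)
    return malicious

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=100)                  # toy consensus model
    benign = reference + 0.01 * rng.normal(size=100)  # toy honest local update
    poisoned = craft_malicious_update(reference, benign, epsilon=0.05)
    print("distance to reference:", np.linalg.norm(poisoned - reference))  # <= 0.05
```

In a colluding setting, several such attackers could coordinate so that their projected offsets reinforce one another; the abstract's "collusion-based enhancement strategies" are not reproduced here.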
Pages: 5989-5999
Number of pages: 11
Related Papers
50 records in total
  • [1] Deep Model Poisoning Attack on Federated Learning
    Zhou, Xingchen
    Xu, Ming
    Wu, Yiming
    Zheng, Ning
    FUTURE INTERNET, 2021, 13 (03)
  • [2] Mitigating Poisoning Attack in Federated Learning
    Uprety, Aashma
    Rawat, Danda B.
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [3] FLAIR: Defense against Model Poisoning Attack in Federated Learning
    Sharma, Atul
    Chen, Wei
    Zhao, Joshua
    Qiu, Qiang
    Bagchi, Saurabh
    Chaterji, Somali
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 553 - +
  • [4] Model poisoning attack in differential privacy-based federated learning
    Yang, Ming
    Cheng, Hang
    Chen, Fei
    Liu, Ximeng
    Wang, Meiqing
    Li, Xibin
    INFORMATION SCIENCES, 2023, 630 : 158 - 172
  • [5] Understanding Distributed Poisoning Attack in Federated Learning
    Cao, Di
    Chang, Shan
    Lin, Zhijian
    Liu, Guohua
    Sunt, Donghong
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 233 - 239
  • [6] FedRecAttack: Model Poisoning Attack to Federated Recommendation
    Rong, Dazhong
    Ye, Shuai
    Zhao, Ruoyan
    Yuen, Hon Ning
    Chen, Jianhai
    He, Qinming
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 2643 - 2655
  • [7] Mitigate Data Poisoning Attack by Partially Federated Learning
    Dam, Khanh Huu The
    Legay, Axel
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [8] FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
    Zhang, Xinyu
    Liu, Qingyu
    Ba, Zhongjie
    Hong, Yuan
    Zheng, Tianhang
    Lin, Feng
    Lu, Li
    Ren, Kui
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9534 - 9549
  • [9] Federated Anomaly Analytics for Local Model Poisoning Attack
    Shi, Siping
    Hu, Chuang
    Wang, Dan
    Zhu, Yifei
    Han, Zhu
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2022, 40 (02) : 596 - 610
  • [10] Federated Learning for Decentralized DDoS Attack Detection in IoT Networks
    Alhasawi, Yaser
    Alghamdi, Salem
    IEEE ACCESS, 2024, 12 : 42357 - 42368