GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems

Cited: 4
Authors
Psychogyios, Konstantinos [1 ]
Velivassaki, Terpsichori-Helen [1 ]
Bourou, Stavroula [1 ]
Voulkidis, Artemis [1 ]
Skias, Dimitrios [2 ]
Zahariadis, Theodore [1 ,3 ]
Affiliations
[1] Synelixis Solut SA, GR-34100 Chalkida, Greece
[2] Netco Intrasoft SA, GR-19002 Paiania, Greece
[3] Natl & Kapodistrian Univ Athens, Gen Dept, GR-15772 Athens, Greece
Funding
European Union Horizon 2020;
Keywords
machine learning; federated learning; generative adversarial networks; data poisoning; label flipping; VULNERABILITY ASSESSMENT; TAXONOMY;
DOI
10.3390/electronics12081805
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning (FL) is an emerging machine learning technique in which models are trained in a decentralized manner. Its main advantage is data privacy, since raw data are never processed on a centralized device. Instead, the local client models are aggregated on a server, yielding a global model that accumulates knowledge from all clients. This approach, however, is vulnerable to attacks, because clients can be malicious or malicious actors may interfere in the network. In the former case, such attacks include data or model poisoning, in which the training data or the model parameters, respectively, are altered. In this paper, we investigate data poisoning attacks and, more specifically, the label-flipping case within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. Both are based on synthetic images generated by a generative adversarial network (GAN), which the malicious clients train jointly on a concatenated malicious dataset. Because of sample limitations in this dataset, the architecture and learning procedure of the GAN are adjusted accordingly. Through experiments, we demonstrate that these attacks are effective at achieving their goals while evading common federated defenses (stealth). We also propose a mechanism to mitigate these attacks based on clean-label training on the server side. In more detail, the model degradation attack causes an accuracy degradation of up to 25%, of which common defenses can recover only about 5%. Similarly, the targeted label attack results in a misclassification rate of 56%, compared with 2.5% when no attack takes place. Our proposed defense mechanism is able to mitigate both attacks.
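The two ideas summarized above, a targeted label-flipping attack on a malicious client's local data and a server-side defense that validates client updates against a small trusted (clean-label) set, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`flip_labels`, `filter_clients`) and the accuracy-threshold filtering rule are hypothetical simplifications.

```python
def flip_labels(labels, source_class, target_class):
    # Targeted label flipping: a malicious client relabels every local
    # sample of source_class as target_class before local training.
    return [target_class if y == source_class else y for y in labels]

def filter_clients(client_updates, clean_eval_fn, threshold):
    # Hypothetical server-side clean-label defense: score each client
    # update on a small trusted dataset and keep only updates whose
    # clean accuracy meets the threshold, dropping suspected poisoners.
    return [u for u in client_updates if clean_eval_fn(u) >= threshold]

# Toy usage: flip class 1 into class 7 on a client's label vector.
poisoned = flip_labels([0, 1, 1, 2, 0], source_class=1, target_class=7)
print(poisoned)  # [0, 7, 7, 2, 0]
```

In a real FL pipeline the server would evaluate each submitted model (or the aggregate with and without it) on the trusted set, rather than a scalar as in this sketch; the point is that the decision uses clean labels the attacker cannot influence.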
Pages: 17
Related Papers
50 items in total
  • [41] On the Performance Impact of Poisoning Attacks on Load Forecasting in Federated Learning
    Qureshi, Naik Bakht Sania
    Kim, Dong-Hoon
    Lee, Jiwoo
    Lee, Eun-Kyu
    UBICOMP/ISWC '21 ADJUNCT: PROCEEDINGS OF THE 2021 ACM INTERNATIONAL JOINT CONFERENCE ON PERVASIVE AND UBIQUITOUS COMPUTING AND PROCEEDINGS OF THE 2021 ACM INTERNATIONAL SYMPOSIUM ON WEARABLE COMPUTERS, 2021, : 64 - 66
  • [42] Dynamic defense against byzantine poisoning attacks in federated learning
    Rodriguez-Barroso, Nuria
    Martinez-Camara, Eugenio
    Victoria Luzon, M.
    Herrera, Francisco
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 133 : 1 - 9
  • [43] FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning
    Chen, Ling-Yuan
    Chiu, Te-Chuan
    Pang, Ai-Chun
    Cheng, Li-Chen
    2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021,
  • [44] FLCert: Provably Secure Federated Learning Against Poisoning Attacks
    Cao, Xiaoyu
    Zhang, Zaixi
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 3691 - 3705
  • [45] Clean-label poisoning attacks on federated learning for IoT
    Yang, Jie
    Zheng, Jun
    Baker, Thar
    Tang, Shuai
    Tan, Yu-an
    Zhang, Quanxin
    EXPERT SYSTEMS, 2023, 40 (05)
  • [46] SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
    Panda, Ashwinee
    Mahloujifar, Saeed
    Bhagoji, Arjun N.
    Chakraborty, Supriyo
    Mittal, Prateek
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [47] Privacy-Preserving Detection of Poisoning Attacks in Federated Learning
    Muhr, Trent
    Zhang, Wensheng
    2022 19TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY & TRUST (PST), 2022,
  • [48] DPFLA: Defending Private Federated Learning Against Poisoning Attacks
    Feng, Xia
    Cheng, Wenhao
    Cao, Chunjie
    Wang, Liangmin
    Sheng, Victor S.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2024, 17 (04) : 1480 - 1491
  • [49] Secure and verifiable federated learning against poisoning attacks in IoMT
    Niu, Shufen
    Zhou, Xusheng
    Wang, Ning
    Kong, Weiying
    Chen, Lihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2025, 122
  • [50] Targeted Clean-Label Poisoning Attacks on Federated Learning
    Patel, Ayushi
    Singh, Priyanka
    RECENT TRENDS IN IMAGE PROCESSING AND PATTERN RECOGNITION, RTIP2R 2022, 2023, 1704 : 231 - 243