Poisoning Attack in Federated Learning using Generative Adversarial Nets

Cited by: 128
Authors
Zhang, Jiale [1 ]
Chen, Junjun [2 ]
Wu, Di [3 ,4 ]
Chen, Bing [1 ]
Yu, Shui [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Beijing Univ Chem Technol, Coll Informat Sci & Technol, Beijing 100029, Peoples R China
[3] Univ Technol Sydney, Sch Software, Sydney, NSW 2007, Australia
[4] Univ Technol Sydney, Ctr Artificial Intelligence, Sydney, NSW 2007, Australia
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; poisoning attack; generative adversarial nets; security; privacy;
DOI
10.1109/TrustCom/BigDataSE.2019.00057
CLC number
TP [Automation & Computer Technology];
Discipline code
0812 ;
Abstract
Federated learning is a novel distributed learning framework in which a deep learning model is trained collaboratively among thousands of participants. Only model parameters are exchanged between the server and the participants, which prevents the server from directly accessing the private training data. However, we observe that the federated learning architecture is vulnerable to an active attack by insider participants, called a poisoning attack, in which the attacker poses as a benign participant and uploads poisoned updates to the server, thereby degrading the performance of the global model. In this work, we study and evaluate a poisoning attack on federated learning systems based on generative adversarial nets (GAN). Specifically, the attacker first acts as a benign participant and stealthily trains a GAN to mimic prototypical samples from the other participants' training sets, which the attacker does not possess. The attacker then uses these generated samples to craft poisoning updates and compromises the global model by uploading scaled poisoning updates to the server. Our evaluation shows that the attacker in our construction can successfully generate samples of other benign participants using the GAN, and the resulting global model achieves more than 80% accuracy on both the poisoning task and the main task.
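The scaling step described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`benign_update`, `poisoned_update`, `num_clients`, the malicious target, the noise model) are illustrative assumptions, and the GAN-generated training data is abstracted into a fixed malicious direction. The point shown is only why scaling lets a single attacker dominate plain federated averaging.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a single attacker scales its
# poisoning update so that it survives averaging with benign updates.

def benign_update(global_weights, rng):
    # Stand-in for an honest client's local training step (small random drift).
    return global_weights + rng.normal(0.0, 0.01, size=global_weights.shape)

def poisoned_update(global_weights, num_clients):
    # The attacker's desired model; in the paper this would come from training
    # on mislabeled GAN-generated samples. Here it is a fixed offset.
    malicious_target = global_weights + 1.0
    # Scale so that (sum of benign updates + this update) / n lands near the
    # target, since the n-1 benign updates stay close to global_weights.
    return num_clients * malicious_target - (num_clients - 1) * global_weights

def federated_average(updates):
    # Plain FedAvg aggregation: unweighted mean of client updates.
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
n = 10
w = np.zeros(4)
updates = [benign_update(w, rng) for _ in range(n - 1)]
updates.append(poisoned_update(w, n))
new_w = federated_average(updates)
# new_w sits near the attacker's target (all ones) despite 9 honest clients.
```

The sketch assumes unweighted averaging and honest clients whose updates stay close to the current global model; under those assumptions, one scaled update is enough to move the aggregate to an attacker-chosen point.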
Pages: 374 - 380
Page count: 7
Related papers
50 in total
  • [21] Collusive Model Poisoning Attack in Decentralized Federated Learning
    Tan, Shouhong
    Hao, Fengrui
    Gu, Tianlong
    Li, Long
    Liu, Ming
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (04) : 5989 - 5999
  • [22] Mitigate Data Poisoning Attack by Partially Federated Learning
    Dam, Khanh Huu The
    Legay, Axel
    18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [23] FLTracer: Accurate Poisoning Attack Provenance in Federated Learning
    Zhang, Xinyu
    Liu, Qingyu
    Ba, Zhongjie
    Hong, Yuan
    Zheng, Tianhang
    Lin, Feng
    Lu, Li
    Ren, Kui
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 9534 - 9549
  • [24] A generative adversarial network-based client-level handwriting forgery attack in federated learning scenario
    Shi, Lei
    Wu, Han
    Ding, Xu
    Xu, Hao
    Pan, Sinan
    EXPERT SYSTEMS, 2025, 42 (02)
  • [25] Generative Adversarial Ranking Nets
    Yao, Yinghua
    Pan, Yuangang
    Li, Jing
    Tsang, Ivor W.
    Yao, Xin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 35
  • [26] An Effective Method to Generate Simulated Attack Data Based on Generative Adversarial Nets
    Xie, Huihui
    Lv, Kun
    Hu, Changzhen
    2018 17TH IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (IEEE TRUSTCOM) / 12TH IEEE INTERNATIONAL CONFERENCE ON BIG DATA SCIENCE AND ENGINEERING (IEEE BIGDATASE), 2018, : 1777 - 1784
  • [27] Lipschitz Generative Adversarial Nets
    Zhou, Zhiming
    Liang, Jiadong
    Song, Yuxuan
    Yu, Lantao
    Wang, Hongwei
    Zhang, Weinan
    Yu, Yong
    Zhang, Zhihua
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [28] Triple Generative Adversarial Nets
    Li, Chongxuan
    Xu, Kun
    Zhu, Jun
    Zhang, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [29] Differentially Privacy-Preserving Federated Learning Using Wasserstein Generative Adversarial Network
    Wan, Yichen
    Qu, Youyang
    Gao, Longxiang
    Xiang, Yong
    26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021,
  • [30] Private and heterogeneous personalized hierarchical federated learning using Conditional Generative Adversarial networks
    Afzali, Afsaneh
    Shamsinejadbabaki, Pirooz
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 276