Decaf: Data Distribution Decompose Attack Against Federated Learning

Cited by: 0
Authors
Dai, Zhiyang [1,2,3]
Gao, Yansong [4]
Zhou, Chunyi [5]
Fu, Anmin [1,2,3]
Zhang, Zhi [4]
Xue, Minhui [6]
Zheng, Yifeng [7]
Zhang, Yuqing [8]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Cyber Sci & Engn, Nanjing 210094, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Minist Educ, Key Lab Cyberspace Secur, Zhengzhou 450001, Peoples R China
[4] Univ Western Australia, Dept Comp Sci & Software Engn, Perth, WA 6009, Australia
[5] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[6] CSIRO Data61, Sydney, NSW 2122, Australia
[7] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[8] Univ Chinese Acad Sci, Natl Comp Network Intrus Protect Ctr, Beijing 101408, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Data privacy; Servers; Privacy; Generative adversarial networks; Distributed databases; Load modeling; Federated learning; Training data; privacy attack; data distribution decompose;
DOI
10.1109/TIFS.2024.3516545
CLC number
TP301 [Theory, Methods];
Discipline code
081202;
Abstract
In contrast to prevalent Federated Learning (FL) privacy inference techniques such as generative adversarial network attacks, membership inference attacks, property inference attacks, and model inversion attacks, we devise an innovative privacy threat: the Data Distribution Decompose Attack on FL, termed Decaf. This attack enables an honest-but-curious FL server to meticulously profile the proportion of each class owned by the victim FL user, divulging sensitive information such as local market item distribution and business competitiveness. The crux of Decaf lies in the profound observation that the magnitude of local model gradient changes closely mirrors the underlying data distribution, including the proportion of each class. Decaf addresses two crucial challenges: accurately identifying the missing/null class(es) of any victim user as a premise, and then quantifying the precise relationship between gradient changes and each remaining non-null class. Notably, Decaf operates stealthily: it is entirely passive and undetectable to victim users whose data distribution privacy it infringes. Experimental validation on five benchmark datasets (MNIST, FASHION-MNIST, CIFAR-10, FER-2013, and SkinCancer) employing diverse model architectures, including customized convolutional networks, standardized VGG16, and ResNet18, demonstrates Decaf's efficacy. Results indicate its ability to accurately decompose local user data distribution, regardless of whether it is IID or non-IID. Specifically, the dissimilarity, measured by the $L_{\infty}$ distance between the distribution decomposed by Decaf and the ground truth, is consistently below 5% when no null classes exist. Moreover, Decaf achieves 100% accuracy in determining any victim user's null classes, validated through formal proof.
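To make the abstract's core idea concrete, the following minimal sketch illustrates (but does not reproduce) the kind of inference described: it assumes the server can inspect the output-layer gradient of a victim's model update, treats classes whose gradient rows have near-zero magnitude as null classes, normalizes the remaining row magnitudes into an estimated class distribution, and reports the $L_{\infty}$ distance to a ground-truth distribution. All names here (decompose_distribution, linf_error, null_threshold) and the synthetic gradient are hypothetical illustrations, not the paper's actual algorithm.

import numpy as np

def decompose_distribution(output_layer_grad, null_threshold=1e-6):
    # Toy sketch (not the paper's method): estimate per-class data proportions
    # from the per-class rows of an output-layer gradient of shape
    # (num_classes, hidden_dim).
    per_class_magnitude = np.linalg.norm(output_layer_grad, axis=1)
    # Classes with (near-)zero gradient magnitude are treated as null classes.
    null_classes = per_class_magnitude < null_threshold
    per_class_magnitude[null_classes] = 0.0
    # Normalize the remaining magnitudes into an estimated class distribution.
    total = per_class_magnitude.sum()
    estimate = per_class_magnitude / total if total > 0 else per_class_magnitude
    return estimate, np.flatnonzero(null_classes)

def linf_error(estimate, ground_truth):
    # L-infinity distance between estimated and true class distributions.
    return np.max(np.abs(np.asarray(estimate) - np.asarray(ground_truth)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_classes, hidden_dim = 10, 64
    # Hypothetical ground truth: class 3 is a null class (no local samples).
    true_dist = np.array([0.15, 0.10, 0.05, 0.0, 0.20, 0.10, 0.10, 0.10, 0.10, 0.10])
    # Synthetic gradient whose row magnitudes roughly track class proportions.
    grad = true_dist[:, None] * rng.normal(size=(num_classes, hidden_dim))
    est, nulls = decompose_distribution(grad)
    print("null classes:", nulls)                      # expected: [3]
    print("L_inf error: %.3f" % linf_error(est, true_dist))

The sketch only captures the intuition that gradient magnitude correlates with class proportion; the paper's reported results (below 5% $L_{\infty}$ dissimilarity and 100% null-class identification) rest on its own decomposition procedure and formal proof, not on this toy normalization.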
Pages: 405-420
Number of pages: 16
Related papers
50 entries in total
  • [21] Dual-domain based backdoor attack against federated learning
    Li, Guorui
    Chang, Runxing
    Wang, Ying
    Wang, Cong
    NEUROCOMPUTING, 2025, 623
  • [22] LFighter: Defending against the label-flipping attack in federated learning
    Jebreel, Najeeb Moharram
    Domingo-Ferrer, Josep
    Sanchez, David
    Blanco-Justicia, Alberto
    NEURAL NETWORKS, 2024, 170 : 111 - 126
  • [23] Research on Block Chain Defense against Malicious Attack in Federated Learning
    Wu, Yiming
    Lu, Gehao
    Fu, Liyu
    Peng, Mao
    2021 THE 3RD INTERNATIONAL CONFERENCE ON BLOCKCHAIN TECHNOLOGY, ICBCT 2021, 2021, : 67 - 72
  • [24] AAIA: an efficient aggregation scheme against inverting attack for federated learning
    Yang, Zhen
    Yang, Shisong
    Huang, Yunbo
    Martínez, José-Fernán
    López, Lourdes
    Chen, Yuwen
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2023, 22 : 919 - 930
  • [25] Improved gradient leakage attack against compressed gradients in federated learning
    Ding, Xuyang
    Liu, Zhengqi
    You, Xintong
    Li, Xiong
    Vasilakos, Athanasios V.
    NEUROCOMPUTING, 2024, 608
  • [26] Poisoning-Assisted Property Inference Attack Against Federated Learning
    Wang, Zhibo
    Huang, Yuting
    Song, Mengkai
    Wu, Libing
    Xue, Feng
    Ren, Kui
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (04) : 3328 - 3340
  • [27] Analyzing User-Level Privacy Attack Against Federated Learning
    Song, Mengkai
    Wang, Zhibo
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    Ren, Ju
    Qi, Hairong
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2020, 38 (10) : 2430 - 2444
  • [28] Poisoning with Cerberus: Stealthy and Colluded Backdoor Attack against Federated Learning
    Lyu, Xiaoting
    Han, Yufei
    Wang, Wei
    Liu, Jingkai
    Wang, Bin
    Liu, Jiqiang
    Zhang, Xiangliang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 9020 - 9028
  • [29] Pocket Diagnosis: Secure Federated Learning Against Poisoning Attack in the Cloud
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Liu, Ximeng
    Choo, Kim-Kwang Raymond
    Deng, Robert H.
    IEEE TRANSACTIONS ON SERVICES COMPUTING, 2022, 15 (06) : 3429 - 3442
  • [30] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501