Decaf: Data Distribution Decompose Attack Against Federated Learning

Cited by: 0
Authors
Dai, Zhiyang [1 ,2 ,3 ]
Gao, Yansong [4 ]
Zhou, Chunyi [5 ]
Fu, Anmin [1 ,2 ,3 ]
Zhang, Zhi [4 ]
Xue, Minhui [6 ]
Zheng, Yifeng [7 ]
Zhang, Yuqing [8 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Cyber Sci & Engn, Nanjing 210094, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Minist Educ, Key Lab Cyberspace Secur, Zhengzhou 450001, Peoples R China
[4] Univ Western Australia, Dept Comp Sci & Software Engn, Perth, WA 6009, Australia
[5] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[6] CSIRO Data61, Sydney, NSW 2122, Australia
[7] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[8] Univ Chinese Acad Sci, Natl Comp Network Intrus Protect Ctr, Beijing 101408, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Data privacy; Servers; Privacy; Generative adversarial networks; Distributed databases; Load modeling; Federated learning; Training data; privacy attack; data distribution decompose;
DOI
10.1109/TIFS.2024.3516545
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
In contrast to prevalent Federated Learning (FL) privacy inference techniques such as generative adversarial network (GAN) attacks, membership inference attacks, property inference attacks, and model inversion attacks, we devise an innovative privacy threat: the Data Distribution Decompose Attack on FL, termed Decaf. This attack enables an honest-but-curious FL server to meticulously profile the proportion of each class owned by a victim FL user, divulging sensitive information such as local market item distribution and business competitiveness. The crux of Decaf lies in the key observation that the magnitude of local model gradient changes closely mirrors the underlying data distribution, including the proportion of each class. Decaf addresses two crucial challenges: accurately identifying the missing/null class(es) of any victim user as a premise, and then quantifying the precise relationship between gradient changes and each remaining non-null class. Notably, Decaf operates stealthily: it is entirely passive and undetectable, leaving victim users unaware that their data distribution privacy has been infringed. Experimental validation on five benchmark datasets (MNIST, FASHION-MNIST, CIFAR-10, FER-2013, and SkinCancer) employing diverse model architectures, including customized convolutional networks, standardized VGG16, and ResNet18, demonstrates Decaf's efficacy. Results indicate its ability to accurately decompose local user data distribution, regardless of whether it is IID or non-IID. Specifically, the dissimilarity, measured as the $L_{\infty}$ distance between the distribution decomposed by Decaf and the ground truth, is consistently below 5% when no null classes exist. Moreover, Decaf achieves 100% accuracy in determining any victim user's null classes, validated through formal proof.
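A minimal sketch of the intuition behind such gradient-based distribution decomposition, assuming training with softmax cross-entropy (this is not the authors' implementation; SimpleNet, estimate_distribution, and all constants are hypothetical): the gradient of the output layer's bias for class c averages to (p_c - y_c) over the local batch, so a class absent from the victim's data (a "null class") produces a strictly positive component, while present classes produce negative components whose magnitudes roughly track their sample shares.

# Sketch (Python / PyTorch) of null-class detection and rough proportion
# estimation from the output-layer bias gradient. Hypothetical names; not
# the Decaf implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleNet(nn.Module):  # hypothetical stand-in for the victim's local model
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.fc = nn.Linear(32, num_classes)  # output (last) layer

    def forward(self, x):
        return self.fc(x)

def estimate_distribution(bias_grad: torch.Tensor):
    """Server-side sketch: read null classes and rough class proportions
    from one observed gradient of the output-layer bias."""
    # Null-class detection: with cross-entropy, the bias-gradient component
    # of an absent class is a mean of softmax probabilities, hence positive.
    null_mask = bias_grad > 0
    # Rough proportion estimate: for a present class the component averages
    # (p_c - freq_c), so its negative part loosely tracks the class share.
    neg = torch.clamp(-bias_grad, min=0.0)
    return null_mask, neg / neg.sum()

# Toy demonstration: a local dataset that is missing class 0 entirely.
torch.manual_seed(0)
model = SimpleNet()
x = torch.randn(300, 32)
y = torch.randint(1, 10, (300,))  # labels drawn from classes 1..9 only
F.cross_entropy(model(x), y).backward()

null_mask, proportions = estimate_distribution(model.fc.bias.grad)
print("detected null classes:", torch.nonzero(null_mask).flatten().tolist())
print("rough class proportions:", proportions)

On this toy run the sketch typically flags class 0 as null and returns a near-uniform estimate over classes 1-9; per the abstract, Decaf itself goes further by precisely quantifying the relationship between gradient changes and each non-null class.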
Pages: 405-420 (16 pages)
Related Papers
50 records in total (entries [31]-[40] shown)
  • [31] Challenges and Countermeasures of Federated Learning Data Poisoning Attack Situation Prediction
    Wu, Jianping
    Jin, Jiahe
    Wu, Chunming
    MATHEMATICS, 2024, 12 (06)
  • [32] Data Poisoning Attack Based on Privacy Reasoning and Countermeasure in Federated Learning
    Lv, Jiguang
    Xu, Shuchun
    Ling, Yi
    Man, Dapeng
    Han, Shuai
    Yang, Wu
2023 19TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN 2023, 2023: 472-479
  • [33] Exploring Clustered Federated Learning's Vulnerability against Property Inference Attack
    Kim, Hyunjun
    Cho, Yungi
    Lee, Younghan
    Bae, Ho
    Paek, Yunheung
PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON RESEARCH IN ATTACKS, INTRUSIONS AND DEFENSES, RAID 2023, 2023: 236-249
  • [34] Privacy protection against attack scenario of federated learning using internet of things
    Yadav, Kusum
    Kareri, Elham
    Alotaibi, Shoayee Dlaim
    Viriyasitavat, Wattana
    Dhiman, Gaurav
    Kaur, Amandeep
    ENTERPRISE INFORMATION SYSTEMS, 2023, 17 (09)
  • [35] An empirical analysis of image augmentation against model inversion attack in federated learning
    Shin, Seunghyeon
    Boyapati, Mallika
    Suo, Kun
    Kang, Kyungtae
    Son, Junggab
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2023, 26 (01): 349-366
  • [36] LFGurad: A Defense against Label Flipping Attack in Federated Learning for Vehicular Network
    Sameera, K. M.
    Vinod, P.
    Rehiman, K. A. Rafidha
    Conti, Mauro
    COMPUTER NETWORKS, 2024, 254
  • [38] Federated Learning with Data-Agnostic Distribution Fusion
    Duan, Jian-hui
    Li, Wenzhong
    Lou, Derun
    Li, Ruichen
    Lu, Sanglu
2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023: 8074-8083
  • [39] The Impact of Data Distribution on Fairness and Robustness in Federated Learning
    Ozdayi, Mustafa Safa
    Kantarcioglu, Murat
2021 THIRD IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2021), 2021: 191-196
  • [40] A Meta-Reinforcement Learning-Based Poisoning Attack Framework Against Federated Learning
    Zhou, Wei
    Zhang, Donglai
    Wang, Hongjie
    Li, Jinliang
    Jiang, Mingjian
IEEE ACCESS, 2025, 13: 28628-28644