Decaf: Data Distribution Decompose Attack Against Federated Learning

Cited: 0
Authors
Dai, Zhiyang [1 ,2 ,3 ]
Gao, Yansong [4 ]
Zhou, Chunyi [5 ]
Fu, Anmin [1 ,2 ,3 ]
Zhang, Zhi [4 ]
Xue, Minhui [6 ]
Zheng, Yifeng [7 ]
Zhang, Yuqing [8 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Cyber Sci & Engn, Nanjing 210094, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Minist Educ, Key Lab Cyberspace Secur, Zhengzhou 450001, Peoples R China
[4] Univ Western Australia, Dept Comp Sci & Software Engn, Perth, WA 6009, Australia
[5] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[6] CSIRO Data61, Sydney, NSW 2122, Australia
[7] Harbin Inst Technol, Shenzhen 518055, Peoples R China
[8] Univ Chinese Acad Sci, Natl Comp Network Intrus Protect Ctr, Beijing 101408, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Data privacy; Servers; Privacy; Generative adversarial networks; Distributed databases; Load modeling; Federated learning; Training data; privacy attack; data distribution decompose;
DOI
10.1109/TIFS.2024.3516545
Chinese Library Classification (CLC) Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
In contrast to prevalent Federated Learning (FL) privacy inference techniques such as generative adversarial network attacks, membership inference attacks, property inference attacks, and model inversion attacks, we devise an innovative privacy threat: the Data Distribution Decompose Attack on FL, termed Decaf. This attack enables an honest-but-curious FL server to meticulously profile the proportion of each class owned by the victim FL user, divulging sensitive information such as local market item distribution and business competitiveness. The crux of Decaf lies in the observation that the magnitude of local model gradient changes closely mirrors the underlying data distribution, including the proportion of each class. Decaf addresses two crucial challenges: accurately identifying any missing/null class(es) of a victim user as a premise, and then quantifying the precise relationship between gradient changes and each remaining non-null class. Notably, Decaf operates stealthily: it is entirely passive, and victim users cannot detect that their data distribution privacy has been infringed. Experimental validation on five benchmark datasets (MNIST, FASHION-MNIST, CIFAR-10, FER-2013, and SkinCancer) employing diverse model architectures, including customized convolutional networks, standardized VGG16, and ResNet18, demonstrates Decaf's efficacy. Results indicate its ability to accurately decompose local user data distributions, regardless of whether they are IID or non-IID. Specifically, the dissimilarity, measured as the $L_{\infty}$ distance between the distribution decomposed by Decaf and the ground truth, is consistently below 5% when no null classes exist. Moreover, Decaf achieves 100% accuracy in determining any victim user's null classes, validated through formal proof.
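The abstract's core intuition lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of that intuition, not the authors' Decaf implementation: it assumes the server can inspect a victim's classifier-head weights before and after local training, treats each class's weight-change magnitude as a proxy for its proportion in the local data, flags near-zero changes as null classes, and reports the L-infinity error used in the abstract. All function names and the thresholding rule are assumptions made purely for illustration.

# Illustrative sketch only -- NOT the authors' Decaf implementation. It assumes
# (hypothetically) that the per-class rows of the classification head move in
# proportion to the victim's class shares; all names and thresholds are made up.
import numpy as np

def decompose_distribution(global_head, local_head, null_threshold=1e-3):
    """Estimate per-class proportions from one round's classifier-head update.

    global_head, local_head: (num_classes, feature_dim) weight matrices of the
    classification head before and after the victim's local training.
    """
    delta = local_head - global_head                  # per-class weight change
    per_class_change = np.linalg.norm(delta, axis=1)  # change magnitude per class
    # Classes whose row barely moved are treated as missing/null classes.
    null_classes = np.where(per_class_change < null_threshold)[0]
    per_class_change[null_classes] = 0.0
    # Normalise the remaining magnitudes into an estimated class distribution.
    total = per_class_change.sum()
    estimate = per_class_change / total if total > 0 else per_class_change
    return estimate, null_classes

def l_inf_error(estimated, ground_truth):
    # L-infinity distance, the error metric reported in the abstract.
    return float(np.max(np.abs(np.asarray(estimated) - np.asarray(ground_truth))))

# Toy usage: a victim that holds no samples of class 3.
rng = np.random.default_rng(0)
num_classes, dim = 10, 64
true_dist = np.array([0.20, 0.10, 0.15, 0.0, 0.05, 0.10, 0.10, 0.10, 0.10, 0.10])
g = rng.normal(size=(num_classes, dim))
l = g + true_dist[:, None] * rng.normal(size=(num_classes, dim))  # synthetic update
est, nulls = decompose_distribution(g, l)
print("null classes:", nulls)  # expected: [3], the class absent from local data
print("L_inf error: %.3f" % l_inf_error(est, true_dist))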
Pages: 405 - 420
Number of pages: 16
Related Papers
50 records in total
  • [41] Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning
    Tang, Zhenheng
    Zhang, Yonggang
    Shi, Shaohuai
    He, Xin
    Han, Bo
    Chu, Xiaowen
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [42] Mitigating Poisoning Attack in Federated Learning
    Uprety, Aashma
    Rawat, Danda B.
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [43] Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework
    Li, Henger
    Sun, Xiaolin
    Zheng, Zizhan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [44] A Novel Data Poisoning Attack in Federated Learning based on Inverted Loss Function
    Gupta, Prajjwal
    Yadav, Krishna
    Gupta, Brij B.
    Alazab, Mamoun
    Gadekallu, Thippa Reddy
    COMPUTERS & SECURITY, 2023, 130
  • [45] Defending Against Data and Model Backdoor Attacks in Federated Learning
    Wang, Hao
    Mu, Xuejiao
    Wang, Dong
    Xu, Qiang
    Li, Kaiju
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): 39276 - 39294
  • [46] FedFed: Feature Distillation against Data Heterogeneity in Federated Learning
    Yang, Zhiqin
    Zhang, Yonggang
    Zheng, Yu
    Tian, Xinmei
    Peng, Hao
    Liu, Tongliang
    Han, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [47] Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning
    He, Ying
    Shen, Zhili
    Hua, Jingyu
    Dong, Qixuan
    Niu, Jiacheng
    Tong, Wei
    Huang, Xu
    Li, Chen
    Zhong, Sheng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 748 - 763
  • [48] FedDAA: a robust federated learning framework to protect privacy and defend against adversarial attack
    Lu, Shiwei
    Li, Ruihu
    Liu, Wenbin
    FRONTIERS OF COMPUTER SCIENCE, 2024, 18 (02)
  • [49] Evil vs evil: using adversarial examples to against backdoor attack in federated learning
    Liu, Tao
    Li, Mingjun
    Zheng, Haibin
    Ming, Zhaoyan
    Chen, Jinyin
    MULTIMEDIA SYSTEMS, 2023, 29 : 553 - 568
  • [50] Cross the Chasm: Scalable Privacy-Preserving Federated Learning against Poisoning Attack
    Li, Yiran
    Hu, Guiqiang
    Liu, Xiaoyuan
    Ying, Zuobin
    2021 18TH INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST (PST), 2021,