Fast and Accurate Deep Leakage from Gradients Based on Wasserstein Distance

Cited: 5
Authors
He, Xing [1 ,2 ]
Peng, Changgen [1 ,3 ]
Tan, Weijie [1 ,3 ,4 ]
Affiliations
[1] Guizhou Univ, Coll Comp Sci & Technol, State Key Lab Publ Big Data, Guiyang 550025, Peoples R China
[2] Guizhou Minzu Univ, Guiyang 550025, Peoples R China
[3] Guizhou Univ, Guizhou Big Data Acad, Guiyang 550025, Peoples R China
[4] Guizhou Univ, Key Lab Adv Mfg Technol, Minist Educ, Guiyang 550025, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NEURAL-NETWORKS;
DOI
10.1155/2023/5510329
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Shared gradients are widely used to protect the private information of training data in distributed machine learning systems. However, research on Deep Leakage from Gradients (DLG) has shown that private training data can be recovered from shared gradients. The DLG method still suffers from several issues, including exploding gradients, a low attack success rate, and low fidelity of the recovered data. In this study, a Wasserstein DLG method, named WDLG, is proposed. Theoretical analysis shows that, provided the output layer of the model has a bias term, the label of a data sample can be predicted from the sign of the gradient with respect to the bias, independently of how well the shared gradient is approximated; the label can therefore be recovered with 100% accuracy. In the proposed method, the Wasserstein distance is used to compute the error loss between the shared gradient and the virtual gradient, which stabilizes model training, resolves the exploding-gradient phenomenon, and improves the fidelity of the recovered data. Moreover, a large-learning-rate strategy is designed to further accelerate model convergence. Finally, the WDLG method is validated on the MNIST, Fashion-MNIST, SVHN, CIFAR-100, and LFW datasets. Experimental results show that the proposed WDLG method provides more stable updates for the virtual data, a higher attack success rate, faster model convergence, higher fidelity of the recovered images, and support for large-learning-rate strategies.
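The label-recovery claim in the abstract can be illustrated with a small sketch. For a softmax-plus-cross-entropy output layer with logits z = Wx + b, the gradient of the loss with respect to the bias is softmax(z) − one_hot(y), so the true class is the only index whose gradient entry is negative. This is the standard analytic result the abstract appears to rely on; the code below uses hypothetical logits and is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def infer_label_from_bias_grad(bias_grad):
    """Recover the label as the index of the unique negative entry.

    All softmax probabilities lie in (0, 1), so every entry of
    softmax(z) - one_hot(y) is positive except the true-class entry,
    which equals p_y - 1 < 0. argmin therefore finds the label.
    """
    return int(np.argmin(bias_grad))

# Demo with hypothetical logits and true label y = 3.
z = np.array([1.0, 0.5, -0.2, 2.0, 0.1])
y = 3
grad_b = softmax(z) - np.eye(5)[y]   # analytic dL/db for cross-entropy
assert (grad_b < 0).sum() == 1       # exactly one negative entry
print(infer_label_from_bias_grad(grad_b))  # prints 3
```

Because this inference depends only on the sign pattern of the shared bias gradient, it succeeds regardless of how well the iterative gradient-matching optimization has converged, which is consistent with the abstract's claim of 100% label recovery.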
Pages: 12