GANobfuscator: Mitigating Information Leakage Under GAN via Differential Privacy

Cited by: 126
Authors
Xu, Chugui [1 ]
Ren, Ju [1 ]
Zhang, Deyu [1 ]
Zhang, Yaoxue [1 ]
Qin, Zhan [2 ]
Ren, Kui [2 ]
Affiliations
[1] Cent South Univ, Sch Comp Sci & Engn, Changsha 410083, Hunan, Peoples R China
[2] Zhejiang Univ, Inst Cyberspace Res, Hangzhou 310058, Zhejiang, Peoples R China
Funding
US National Science Foundation;
Keywords
Information leakage; generative adversarial network; deep learning; differential privacy; NOISE;
DOI
10.1109/TIFS.2019.2897874
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202;
Abstract
By learning generative models of semantically rich data distributions from samples, the generative adversarial network (GAN) has recently attracted intensive research interest owing to its excellent empirical performance as a generative model. A GAN estimates the underlying distribution of a dataset and randomly generates realistic samples according to the estimated distribution. However, because of the high model complexity of deep networks, GANs can easily memorize training samples, and when they are applied to private or sensitive data, this concentration of the learned distribution may divulge critical information. New techniques are therefore required to mitigate information leakage under GANs. To address this issue, we propose GANobfuscator, a differentially private GAN that achieves differential privacy by adding carefully designed noise to gradients during the learning procedure. With GANobfuscator, analysts can generate an unlimited amount of synthetic data for arbitrary analysis tasks without compromising the privacy of the training data. We theoretically prove that GANobfuscator provides a strict differential privacy guarantee. In addition, we develop a gradient-pruning strategy for GANobfuscator that improves the scalability and stability of training. Through extensive experimental evaluation on benchmark datasets, we demonstrate that GANobfuscator produces high-quality synthetic data and retains desirable utility under practical privacy budgets.
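The mechanism summarized in the abstract, namely gradient perturbation with a pruning (clipping) bound, follows the same pattern as DP-SGD applied to a GAN's training updates. The following minimal PyTorch-style sketch illustrates that pattern only; the function name dp_discriminator_step, the parameters clip_bound and noise_scale, and the simplified Wasserstein-style loss are illustrative assumptions, not the paper's exact formulation, loss, or privacy accounting.

    import torch

    def dp_discriminator_step(discriminator, optimizer, real_batch, fake_batch,
                              clip_bound=1.0, noise_scale=1.0):
        # One private discriminator update: per-example gradients are clipped
        # (pruned) to clip_bound and perturbed with Gaussian noise before the
        # optimizer step. Names and loss are illustrative, not the paper's.
        params = [p for p in discriminator.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]

        for real_x, fake_x in zip(real_batch, fake_batch):
            # Simplified Wasserstein-style critic loss on a single example pair.
            loss = (discriminator(fake_x.unsqueeze(0)).mean()
                    - discriminator(real_x.unsqueeze(0)).mean())
            grads = torch.autograd.grad(loss, params)
            # Clip the per-example gradient to L2 norm <= clip_bound.
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            factor = torch.clamp(clip_bound / (norm + 1e-12), max=1.0)
            for s, g in zip(summed, grads):
                s.add_(g * factor)

        optimizer.zero_grad()
        batch_size = len(real_batch)
        for p, s in zip(params, summed):
            # Gaussian noise calibrated to the clipping bound, then averaged.
            noise = noise_scale * clip_bound * torch.randn_like(p)
            p.grad = (s + noise) / batch_size
        optimizer.step()

If, as in typical differentially private GAN designs, only the discriminator touches real data, the generator can inherit the guarantee by post-processing; the paper's own analysis should be consulted for the exact construction and privacy-budget composition over the full training run.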
Pages: 2358-2371
Page count: 14
Related Papers
50 records in total
  • [31] MP-CLF: An effective Model-Preserving Collaborative deep Learning Framework for mitigating data leakage under the GAN
    Chen, Zhenzhu
    Wu, Jie
    Fu, Anmin
    Su, Mang
    Deng, Robert H.
    KNOWLEDGE-BASED SYSTEMS, 2023, 270
  • [32] A Data Leakage Traceability Scheme Based on Differential Privacy and Fingerprint
    Wang, Mingyong
    Zheng, Shuli
    2024 3RD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND MEDIA COMPUTING, ICIPMC 2024, 2024, : 327 - 334
  • [33] In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning
    Wang, Jiaqi
    Schuster, Roei
    Shumailov, Ilia
    Lie, David
    Papernot, Nicolas
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [34] Hypothesis Testing under Maximal Leakage Privacy Constraints
    Liao, Jiachun
    Sankar, Lalitha
    Calmon, Flavio P.
    Tan, Vincent Y. F.
    2017 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2017, : 779 - 783
  • [35] Measures of Information Leakage for Incomplete Statistical Information: Application to a Binary Privacy Mechanism
    Sakib, Shahnewaz Karim
    Amariucai, George T.
    Guan, Yong
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2023, 26 (04)
  • [36] Distributed Differential Privacy via Shuffling
    Cheu, Albert
    Smith, Adam
    Ullman, Jonathan
    Zeber, David
    Zhilyaev, Maxim
    ADVANCES IN CRYPTOLOGY - EUROCRYPT 2019, PT I, 2019, 11476 : 375 - 403
  • [37] Differential Privacy via Wavelet Transforms
    Xiao, Xiaokui
    Wang, Guozhang
    Gehrke, Johannes
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2011, 23 (08) : 1200 - 1214
  • [38] Mechanism design via differential privacy
    McSherry, Frank
    Talwar, Kunal
    48TH ANNUAL IEEE SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE, PROCEEDINGS, 2007, : 94 - 103
  • [39] Label differential privacy via clustering
    Esfandiari, Hossein
    Mirrokni, Vahab
    Syed, Umar
    Vassilvitskii, Sergei
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [40] Differential Privacy via Wavelet Transforms
    Xiao, Xiaokui
    Wang, Guozhang
    Gehrke, Johannes
    26TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING ICDE 2010, 2010, : 225 - 236