Private Knowledge Transfer via Model Distillation with Generative Adversarial Networks

Cited by: 1
Authors
Gao, Di [1 ]
Zhuo, Cheng [1 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
Funding
National Key R&D Program of China; US National Science Foundation (NSF)
Keywords
DOI
10.3233/FAIA200294
CLC classification
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
The deployment of deep learning applications must address growing privacy concerns when private and sensitive data are used for training. A conventional deep learning model is prone to privacy attacks that can recover sensitive information about individuals from either the model parameters or query access to the target model. Recently, differential privacy, which offers provable privacy guarantees, has been proposed to train neural networks in a privacy-preserving manner and thereby protect the training data. However, many approaches provide worst-case privacy guarantees for model publishing, inevitably impairing the accuracy of the trained models. In this paper, we present a novel private knowledge transfer strategy in which a private teacher trained on sensitive data is never publicly accessible but instead teaches a student that can be publicly released. In particular, a three-player (teacher-student-discriminator) learning framework is proposed to trade off utility against privacy: the student acquires distilled knowledge from the teacher and is trained against the discriminator to produce outputs similar to the teacher's. We then integrate a differential privacy protection mechanism into the learning procedure, which yields a rigorous privacy budget for training. The framework allows the student to be trained with only unlabelled public data and very few epochs, and hence prevents exposure of the sensitive training data while preserving model utility under a modest privacy budget. Experiments on the MNIST, SVHN, and CIFAR-10 datasets show that our students incur accuracy losses w.r.t. their teachers of 0.89%, 2.29%, and 5.16%, respectively, with privacy bounds of (1.93, 10^-5), (5.02, 10^-6), and (8.81, 10^-6). Compared with the existing works [15, 20], the proposed work achieves a 582% accuracy loss improvement.
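The abstract describes a three-player scheme: a discriminator learns to distinguish teacher outputs from student outputs, while the student minimizes a distillation loss plus an adversarial loss, and the teacher's outputs are perturbed to obtain a differential-privacy guarantee. Below is a minimal, hypothetical PyTorch sketch of one such training step. The model shapes, the loss weighting alpha, the temperature, and the plain Gaussian perturbation of teacher logits are illustrative assumptions only; they do not reproduce the paper's actual architecture or its noise calibration and privacy accounting.

```python
# Hypothetical sketch of one teacher-student-discriminator training step
# with Gaussian-perturbed teacher outputs (all names and hyperparameters assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

num_classes, batch = 10, 32
teacher = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, num_classes))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, num_classes))
# The discriminator tries to tell teacher logits from student logits.
discriminator = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(), nn.Linear(32, 1))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x_public = torch.randn(batch, 784)         # stand-in for unlabelled public data
sigma, temperature, alpha = 1.0, 4.0, 0.5  # assumed hyperparameters

with torch.no_grad():
    t_logits = teacher(x_public)
    # Gaussian noise on teacher outputs stands in for the paper's differential
    # privacy mechanism; the real noise scale would be calibrated to a budget.
    t_logits = t_logits + sigma * torch.randn_like(t_logits)

# Discriminator step: teacher outputs are "real", student outputs are "fake".
s_logits = student(x_public)
d_real = discriminator(t_logits)
d_fake = discriminator(s_logits.detach())
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Student step: temperature-scaled distillation loss plus adversarial loss.
s_logits = student(x_public)
loss_kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=1),
                   F.softmax(t_logits / temperature, dim=1),
                   reduction="batchmean") * temperature ** 2
loss_adv = bce(discriminator(s_logits), torch.ones_like(d_fake))  # fool the discriminator
loss_s = alpha * loss_kd + (1 - alpha) * loss_adv
opt_s.zero_grad()
loss_s.backward()
opt_s.step()

print(f"loss_d={loss_d.item():.3f}  loss_s={loss_s.item():.3f}")
```

In this sketch the student never touches the sensitive data directly; it only sees public inputs and noised teacher outputs, which is the property the abstract attributes to the proposed framework.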
Pages: 1794 - 1801
Page count: 8
Related Papers
50 results in total
  • [1] PKDGAN: Private Knowledge Distillation With Generative Adversarial Networks
    Zhuo, Cheng
    Gao, Di
    Liu, Liangwei
    [J]. IEEE Transactions on Big Data, 2024, 10 (06): 775 - 788
  • [2] Research on Knowledge Distillation of Generative Adversarial Networks
    Wang, Wei
    Zhang, Baohua
    Cui, Tao
    Chai, Yimeng
    Li, Yue
    [J]. 2021 DATA COMPRESSION CONFERENCE (DCC 2021), 2021, : 376 - 376
  • [3] KDGAN: Knowledge Distillation with Generative Adversarial Networks
    Wang, Xiaojie
    Zhang, Rui
    Sun, Yu
    Qi, Jianzhong
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [4] Application of Knowledge Distillation in Generative Adversarial Networks
    Zhang, Xu
    [J]. 2023 3RD ASIA-PACIFIC CONFERENCE ON COMMUNICATIONS TECHNOLOGY AND COMPUTER SCIENCE, ACCTCS, 2023, : 65 - 71
  • [5] Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation
    Li, Junjie
    Zhang, Junwei
    Gong, Xiaoyu
    Lu, Shuai
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [6] Differentially Private Generative Adversarial Networks with Model Inversion
    Chen, Dongjie
    Cheung, Sen-ching Samson
    Chuah, Chen-Nee
    Ozonoff, Sally
    [J]. 2021 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY (WIFS), 2021, : 26 - 31
  • [7] Learning Informative and Private Representations via Generative Adversarial Networks
    Yang, Tsung-Yen
    Brinton, Christopher
    Mittal, Prateek
    Chiang, Mung
    Lan, Andrew
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2018, : 1534 - 1543
  • [8] Differentially private facial obfuscation via generative adversarial networks
    Croft, William L.
    Sack, Joerg-Ruediger
    Shi, Wei
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2022, 129 : 358 - 379
  • [9] Memristive KDG-BNN: Memristive binary neural networks trained via knowledge distillation and generative adversarial networks
    Gao, Tongtong
    Zhou, Yue
    Duan, Shukai
    Hu, Xiaofang
    [J]. KNOWLEDGE-BASED SYSTEMS, 2022, 249
  • [10] Private Model Compression via Knowledge Distillation
    Wang, Ji
    Bao, Weidong
    Sun, Lichao
    Zhu, Xiaomin
    Cao, Bokai
    Yu, Philip S.
    [J]. THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 1190 - +