Heterogeneous Gaussian Mechanism: Preserving Differential Privacy in Deep Learning with Provable Robustness

Cited by: 0
Authors
Phan, NhatHai [1]
Vu, Minh N. [5 ]
Liu, Yang [1 ]
Jin, Ruoming [2 ]
Dou, Dejing [3 ]
Wu, Xintao [4 ]
Thai, My T. [5 ]
Affiliations
[1] New Jersey Inst Technol, Newark, NJ 07102 USA
[2] Kent State Univ, Kent, OH 44240 USA
[3] Univ Oregon, Eugene, OR 97403 USA
[4] Univ Arkansas, Fayetteville, AR 72701 USA
[5] Univ Florida, Gainesville, FL USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a novel Heterogeneous Gaussian Mechanism (HGM) to preserve differential privacy in deep neural networks, with provable robustness against adversarial examples. We first relax the constraint on the privacy budget in the traditional Gaussian Mechanism from (0, 1] to (0, ∞), with a new bound on the noise scale that preserves differential privacy. The noise in our mechanism can be arbitrarily redistributed, offering a distinctive ability to address the trade-off between model utility and privacy loss. To derive provable robustness, our HGM is applied to inject Gaussian noise into the first hidden layer. Then, a tighter robustness bound is proposed. Theoretical analysis and thorough evaluations show that our mechanism notably improves the robustness of differentially private deep neural networks, compared with baseline approaches, under a variety of model attacks.
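A minimal NumPy sketch of the two ideas above: the classical Gaussian-mechanism noise scale sigma = sqrt(2 ln(1.25/delta)) * Delta_f / epsilon, which is only valid for a privacy budget epsilon in (0, 1], and a heterogeneous redistribution of that noise across the coordinates of the first hidden layer. This is an illustration, not the paper's implementation: the function names, the weight vector, and the example dimensions are assumptions, and the HGM's extended noise-scale bound for epsilon > 1 is derived in the paper and not reproduced here.

    import numpy as np

    def gaussian_noise_scale(sensitivity, epsilon, delta):
        # Classical Gaussian-mechanism noise scale (Dwork & Roth), valid only for
        # epsilon in (0, 1]; the HGM derives a bound that extends this to (0, inf).
        assert 0.0 < epsilon <= 1.0, "classical bound requires epsilon in (0, 1]"
        return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

    def heterogeneous_gaussian_noise(dim, sigma, weights, rng=None):
        # Illustrative heterogeneous noise: a positive weight vector (normalized to
        # mean 1) redistributes the noise budget across coordinates, so some features
        # absorb more perturbation than others. Mirrors the idea of the HGM; it is
        # NOT the paper's exact formulation.
        rng = np.random.default_rng() if rng is None else rng
        weights = np.asarray(weights, dtype=float)
        assert weights.shape == (dim,) and np.all(weights > 0)
        weights = weights * dim / weights.sum()           # mean weight = 1
        return rng.normal(0.0, sigma * np.sqrt(weights))  # per-coordinate std

    # Example (hypothetical sizes): perturb the first hidden layer of a network.
    rng = np.random.default_rng(0)
    hidden = np.tanh(rng.standard_normal(32) @ rng.standard_normal((32, 128)))
    sigma = gaussian_noise_scale(sensitivity=1.0, epsilon=0.5, delta=1e-5)
    weights = np.linspace(0.5, 1.5, num=128)              # heavier noise on later units
    hidden_noisy = hidden + heterogeneous_gaussian_noise(128, sigma, weights, rng)

Normalizing the weight vector to mean 1 keeps the overall noise level comparable to the homogeneous case while letting individual coordinates of the hidden representation receive more or less perturbation, which is the sense in which the abstract describes the noise as "arbitrarily redistributed" to trade utility against privacy loss.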
Pages: 4753-4759
Page count: 7
Related Papers
(50 records in total)
  • [1] Differential Privacy Preserving Deep Learning in Healthcare
    Wu, Xintao
    2017 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM), 2017, : 8 - 8
  • [2] A differential privacy-preserving deep learning caching framework for heterogeneous communication network systems
    Wang, Huanhuan
    Zhang, Xiao
    Xia, Youbing
    Wu, Xiang
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2022, 37 (12) : 11142 - 11166
  • [3] Safety and Robustness for Deep Learning with Provable Guarantees
    Kwiatkowska, Marta
    2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), 2020, : 1 - 3
  • [4] Safety and Robustness for Deep Learning with Provable Guarantees
    Kwiatkowska, Marta
    ESEC/FSE'2019: PROCEEDINGS OF THE 2019 27TH ACM JOINT MEETING ON EUROPEAN SOFTWARE ENGINEERING CONFERENCE AND SYMPOSIUM ON THE FOUNDATIONS OF SOFTWARE ENGINEERING, 2019, : 2 - 2
  • [5] A Pragmatic Privacy-Preserving Deep Learning Framework Satisfying Differential Privacy
    Dang, T. K.
    Tran-Truong, P. T.
    SN COMPUTER SCIENCE, 5 (1)
  • [6] Privacy-Preserving Classification on Deep Learning with Exponential Mechanism
    Ju, Quan
    Xia, Rongqing
    Li, Shuhong
    Zhang, Xiaojian
    INTERNATIONAL JOURNAL OF COMPUTATIONAL INTELLIGENCE SYSTEMS, 2024, 17 (01)
  • [7] Differential privacy: a privacy cloak for preserving utility in heterogeneous datasets
    Gupta, Saurabh
    Buduru, Arun Balaji
    Kumaraguru, Ponnurangam
    CSI Transactions on ICT, 2022, 10 (1) : 25 - 36
  • [8] Editorial: Privacy-Preserving Deep Heterogeneous View Perception for Data Learning
    Li, Peng
    FRONTIERS IN NEUROROBOTICS, 2022, 16