On the Utility and Protection of Optimization with Differential Privacy and Classic Regularization Techniques

Cited by: 0
Authors
Lomurno, Eugenio [1 ]
Matteucci, Matteo [1 ]
Affiliations
[1] Politecn Milan, Milan, Italy
Funding
EU Horizon 2020
关键词
Differential privacy; Regularization; Membership inference; Model inversion;
DOI
10.1007/978-3-031-25599-1_17
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Nowadays, owners and developers of deep learning models must comply with stringent privacy-preservation rules for their training data, which is usually crowd-sourced and may retain sensitive information. The most widely adopted method to enforce the privacy guarantees of a deep learning model relies on optimization techniques enforcing differential privacy. According to the literature, this approach has proven to be a successful defence against several privacy attacks on models, but its downside is a substantial degradation of the models' performance. In this work, we compare the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques. We analyze the resulting models' utility, their training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss the flaws and limits of differential privacy and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
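The DP-SGD mechanism the abstract refers to replaces the ordinary gradient step with per-example gradient clipping followed by calibrated Gaussian noise. A minimal illustrative sketch in NumPy is given below; the function name, parameter defaults, and the flat-vector gradient representation are our own simplifications, not the paper's implementation.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update (sketch):
    1. clip each per-example gradient to L2 norm <= clip_norm,
    2. average the clipped gradients,
    3. add Gaussian noise with std noise_multiplier * clip_norm / batch_size,
    4. take a plain gradient step."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down only if the gradient exceeds the clipping norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads),
        size=w.shape)
    return w - lr * (mean_grad + noise)
```

By contrast, the classic regularizers the paper compares against need no per-example handling: l2-regularization merely adds a `weight_decay * w` term to the averaged gradient, and dropout randomly zeroes activations during training, which is why both are far cheaper than DP-SGD's per-example clipping.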
Pages: 223-238
Page count: 16
Related Papers
50 records
  • [1] Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
    Mireshghallah, Fatemehsadat
    Inan, Huseyin A.
    Hasegawa, Marcello
    Ruhle, Victor
    Berg-Kirkpatrick, Taylor
    Sim, Robert
    [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3799 - 3807
  • [2] Utility Optimization of Federated Learning with Differential Privacy
    Zhao, Jianzhe
    Mao, Keming
    Huang, Chenxi
    Zeng, Yuyang
    [J]. DISCRETE DYNAMICS IN NATURE AND SOCIETY, 2021, 2021
  • [3] Distributed Differential Utility/Cost Analysis for Privacy Protection
    Shen, Xicong
    Liu, Ying
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2019, 26 (10) : 1436 - 1440
  • [4] AdaBias: An Optimization Method With Bias Correction for Differential Privacy Protection
    Zhao, Xuanyu
    Hu, Tao
    Li, Jun
    Mao, Chunxia
    [J]. IEEE ACCESS, 2022, 10 : 107010 - 107021
  • [5] Differential privacy protection scheme supporting high data utility and fault tolerance
    Zhang L.
    Zhang J.
    [J]. Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2019, 53 (08): : 1496 - 1505
  • [6] Individual Differential Privacy: A Utility-Preserving Formulation of Differential Privacy Guarantees
    Soria-Comas, Jordi
    Domingo-Ferrer, Josep
    Sanchez, David
    Megias, David
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2017, 12 (06) : 1418 - 1429
  • [7] Trajectory Privacy Protection Method Based on Differential Privacy
    Yuan S.-L.
    Pi D.-C.
    Xu M.
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2021, 49 (07): : 1266 - 1273
  • [8] Trajectory privacy protection scheme based on differential privacy
    Chen, Si
    Fu, Anmin
    Su, Mang
    Sun, Huaijiang
    [J]. Tongxin Xuebao/Journal on Communications, 2021, 42 (09): : 54 - 64
  • [9] Tight Analysis of Privacy and Utility Tradeoff in Approximate Differential Privacy
    Geng, Quan
    Ding, Wei
    Guo, Ruiqi
    Kumar, Sanjiv
    [J]. INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 89 - 98
  • [10] Differential privacy: a privacy cloak for preserving utility in heterogeneous datasets
    Saurabh Gupta
    Arun Balaji Buduru
    Ponnurangam Kumaraguru
    [J]. CSI Transactions on ICT, 2022, 10 (1) : 25 - 36