On the Utility and Protection of Optimization with Differential Privacy and Classic Regularization Techniques

Citations: 0
Authors
Lomurno, Eugenio [1 ]
Matteucci, Matteo [1 ]
Affiliation
[1] Politecn Milan, Milan, Italy
Funding
EU Horizon 2020
Keywords
Differential privacy; Regularization; Membership inference; Model inversion;
DOI
10.1007/978-3-031-25599-1_17
Chinese Library Classification
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Nowadays, owners and developers of deep learning models must comply with stringent privacy-preservation rules for their training data, which is usually crowd-sourced and retains sensitive information. The most widely adopted method to enforce privacy guarantees in a deep learning model relies on optimization techniques that enforce differential privacy. According to the literature, this approach has proven a successful defence against several privacy attacks on models, but its downside is a substantial degradation of model performance. In this work, we compare the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques. We analyse the resulting models' utility, training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss the flaws and limits of differential privacy and empirically demonstrate the often superior privacy-preserving properties of dropout and l2-regularization.
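As a rough illustration of the two optimization styles the abstract compares, the sketch below contrasts a DP-SGD-style update (per-example gradient clipping followed by Gaussian noise on the averaged gradient) with a plain SGD update using l2-regularization (weight decay). The function names and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD-style step: clip each per-example gradient to clip_norm,
    average the clipped gradients, then add Gaussian noise whose scale is
    proportional to noise_multiplier * clip_norm."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

def l2_sgd_step(params, grad, lr=0.1, weight_decay=1e-2):
    """Standard SGD step with l2-regularization: the penalty gradient
    weight_decay * params is added to the loss gradient."""
    return params - lr * (grad + weight_decay * params)
```

Clipping bounds each example's contribution to the update, which is what makes the added noise sufficient for a formal privacy guarantee; the l2 step has no such guarantee but shrinks weights toward zero, which the paper argues can also curb memorization.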
Pages: 223-238 (16 pages)