Differential privacy in deep learning: Privacy and beyond

Cited by: 6
Authors
Wang, Yanling [1 ,2 ,3 ]
Wang, Qian [2 ]
Zhao, Lingchen [2 ]
Wang, Cong [3 ]
Affiliations
[1] Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, 299 Bayi Road, Wuhan 430072, Hubei, China
[2] School of Cyber Science and Engineering, Wuhan University, 299 Bayi Road, Wuhan 430072, Hubei, China
[3] Department of Computer Science, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
Funding
National Natural Science Foundation of China
Keywords
Deep learning; Differential privacy; Stochastic gradient descent; Lower bound; Fairness; Robustness; Edge; Inference; Attacks
DOI
10.1016/j.future.2023.06.010
Chinese Library Classification
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
Motivated by the security risks of deep neural networks, such as membership and attribute inference attacks, differential privacy has emerged as a promising approach for protecting the privacy of neural networks. It is therefore important to examine the intersection of differential privacy and deep learning, which is the main motivation behind this survey. Most current research in this field focuses on mechanisms that combine differentially private perturbations with deep learning frameworks. We provide a detailed summary of these works and analyze areas with potential for improvement in the near future. Beyond privacy protection, differential privacy can also play other critical roles in deep learning, such as improving fairness, robustness, and resistance to overfitting, which previous research has not thoroughly explored. Accordingly, we also discuss future research directions in these areas and offer practical suggestions for future studies. (c) 2023 Elsevier B.V. All rights reserved.
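Several of the works listed under Related Papers, most notably Abadi et al. [1], combine differentially private perturbations with training by clipping each example's gradient and adding Gaussian noise (DP-SGD). The following is a minimal sketch of that recipe, assuming a toy logistic-regression task on synthetic data and illustrative hyperparameters (clipping norm C, noise multiplier sigma); it is not the implementation from any of the cited papers and omits privacy accounting.

```python
# Minimal DP-SGD-style sketch: per-example gradient clipping + Gaussian noise.
# The dataset, clipping norm C, and noise multiplier sigma are illustrative
# assumptions, not values from the surveyed papers.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data (for illustration only).
n, d = 512, 10
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def per_example_grads(w, Xb, yb):
    """Per-example logistic-loss gradients, shape (batch, d)."""
    p = 1.0 / (1.0 + np.exp(-(Xb @ w)))  # predicted probabilities
    return (p - yb)[:, None] * Xb        # gradient of the log loss per example

def dp_sgd_step(w, Xb, yb, lr=0.1, C=1.0, sigma=1.0):
    g = per_example_grads(w, Xb, yb)
    # Clip each example's gradient to L2 norm at most C.
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g = g / np.maximum(1.0, norms / C)
    # Sum, add Gaussian noise calibrated to the clipping norm, then average.
    noisy = g.sum(axis=0) + sigma * C * rng.normal(size=w.shape)
    return w - lr * noisy / len(yb)

w = np.zeros(d)
batch = 64
for step in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])

acc = np.mean(((X @ w) > 0) == y.astype(bool))
print(f"training accuracy with noisy updates: {acc:.2f}")
```

In a full pipeline, the noise multiplier, sampling rate, and number of steps would additionally be fed into a privacy accountant (e.g., the moments accountant of [1]) to report the cumulative (epsilon, delta) guarantee, which this sketch does not do.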
Pages: 408-424
Page count: 17
Related Papers
50 records in total
  • [1] Abadi, Martin; Chu, Andy; Goodfellow, Ian; McMahan, H. Brendan; Mironov, Ilya; Talwar, Kunal; Zhang, Li. Deep Learning with Differential Privacy. CCS '16: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016: 308-318.
  • [2] Ghazi, Badih; Golowich, Noah; Kumar, Ravi; Manurangsi, Pasin; Zhang, Chiyuan. Deep Learning with Label Differential Privacy. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [3] Arachchige, Pathum Chamikara Mahawaga; Bertok, Peter; Khalil, Ibrahim; Liu, Dongxi; Camtepe, Seyit; Atiquzzaman, Mohammed. Local Differential Privacy for Deep Learning. IEEE Internet of Things Journal, 2020, 7(7): 5827-5842.
  • [4] Kotevska, Olivera; Alamudun, Folami; Stanley, Christopher. Optimal Balance of Privacy and Utility with Differential Privacy Deep Learning Frameworks. 2021 International Conference on Computational Science and Computational Intelligence (CSCI 2021), 2021: 425-430.
  • [5] Li, Xinyan; Chen, Yufei; Wang, Cong; Shen, Chao. When Deep Learning Meets Differential Privacy: Privacy, Security, and More. IEEE Network, 2021, 35(6): 148-155.
  • [6] Ziller, Alexander; Usynin, Dmitrii; Braren, Rickmer; Makowski, Marcus; Rueckert, Daniel; Kaissis, Georgios. Medical imaging deep learning with differential privacy. Scientific Reports, 2021, 11(1).
  • [7] Cheng, Hsin-Pai; Yu, Patrick; Hu, Haojing; Zawad, Syed; Yan, Feng; Li, Shiyu; Li, Hai; Chen, Yiran. Towards Decentralized Deep Learning with Differential Privacy. Cloud Computing - CLOUD 2019, 2019, 11513: 130-145.
  • [8] Wu, Xintao. Differential Privacy Preserving Deep Learning in Healthcare. 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), 2017: 8-8.
  • [9] Pan, Ke; Ong, Yew-Soon; Gong, Maoguo; Li, Hui; Qin, A. K.; Gao, Yuan. Differential privacy in deep learning: A literature survey. Neurocomputing, 2024, 589.
  • [10] El Ouadrhiri, Ahmed; Abdelhadi, Ahmed. Differential Privacy for Deep and Federated Learning: A Survey. IEEE Access, 2022, 10: 22359-22380.