Gradient Sparsification Can Improve Performance of Differentially-Private Convex Machine Learning

Cited by: 2
Author
Farokhi, Farhad [1]
Affiliation
[1] University of Melbourne, Department of Electrical and Electronic Engineering, Melbourne, VIC, Australia
DOI: 10.1109/CDC45484.2021.9683246
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Subject Classification Code: 0812
Abstract
We use gradient sparsification to reduce the adverse effect of differential privacy noise on the performance of private machine learning models. To this end, we employ compressed sensing and additive Laplace noise to evaluate differentially-private gradients. The noisy privacy-preserving gradients are then used in stochastic gradient descent to train machine learning models. Sparsification, achieved by setting the smallest gradient entries to zero, can slow the convergence of the training algorithm. However, through sparsification and compressed sensing, both the dimension of the communicated gradient and the magnitude of the additive noise can be reduced. The interplay between these effects determines whether gradient sparsification improves the performance of differentially-private machine learning models, and we investigate this analytically in the paper. We prove that, for small privacy budgets, compression can improve the performance of privacy-preserving machine learning models. For large privacy budgets, however, compression does not necessarily improve performance. Intuitively, this is because the effect of the privacy-preserving noise is minimal in the large-privacy-budget regime, so the gains from gradient sparsification cannot compensate for its slower convergence.
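To make the mechanism described in the abstract concrete, the following is a minimal Python sketch, not code from the paper: the gradient is sparsified by keeping its k largest-magnitude entries, compressed with a random Gaussian measurement matrix, privatized with additive Laplace noise, and reconstructed before the SGD update. The function name, the parameters (k, m, epsilon, clip_l1), the L1 clipping step, and the noise calibration are illustrative assumptions rather than the author's exact construction.

```python
import numpy as np

def private_sparse_gradient(grad, k, m, epsilon, clip_l1=1.0, rng=None):
    """Illustrative sketch (hypothetical parameter names): top-k
    sparsification, compressed-sensing measurement, Laplace noise,
    and approximate reconstruction of a private gradient."""
    rng = np.random.default_rng() if rng is None else rng
    d = grad.size

    # 1) Sparsify: zero out all but the k largest-magnitude entries.
    sparse = np.zeros(d)
    idx = np.argsort(np.abs(grad))[-k:]
    sparse[idx] = grad[idx]

    # 2) Clip in L1 norm so the sensitivity of the compressed gradient
    #    stays bounded (assumed bound: clip_l1).
    norm1 = np.abs(sparse).sum()
    if norm1 > clip_l1:
        sparse *= clip_l1 / norm1

    # 3) Compress: m-dimensional random Gaussian measurement matrix.
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, d))
    compressed = Phi @ sparse

    # 4) Privatize: additive Laplace noise; the scale below is an
    #    illustrative calibration only, not the paper's exact bound.
    sensitivity = clip_l1 * np.abs(Phi).max()
    noisy = compressed + rng.laplace(0.0, sensitivity / epsilon, size=m)

    # 5) Reconstruct an approximate gradient (minimum-norm least squares
    #    as a stand-in for a proper sparse-recovery solver).
    recovered, *_ = np.linalg.lstsq(Phi, noisy, rcond=None)
    return recovered

# Usage: one noisy-gradient SGD step on a toy least-squares objective.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 20)), rng.normal(size=100)
w = np.zeros(20)
grad = X.T @ (X @ w - y) / len(y)
w -= 0.1 * private_sparse_gradient(grad, k=5, m=10, epsilon=1.0, rng=rng)
```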
Pages: 1695-1700
Number of pages: 6