Differential Privacy Preservation in Interpretable Feedforward-Designed Convolutional Neural Networks

Cited by: 9
Authors
Li, De
Wang, Jinyan [1 ]
Tan, Zhou
Li, Xianxian
Hu, Yuhang
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Differential privacy; Interpretable CNN; Privacy risks; Deep learning;
DOI
10.1109/TrustCom50675.2020.00089
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline Classification Code
0812
Abstract
The feedforward-designed convolutional neural network (FF-CNN) is an interpretable network whose parameters are trained without backpropagation (BP) or iterative optimization algorithms such as SGD. Instead, the parameters of each layer are computed in a one-pass manner from statistics of the previous layer's output. Because its training complexity is lower than that of BP-based methods, FF-CNN offers better utility in semi-supervised learning, ensemble learning, and continual subspace learning. However, releasing an FF-CNN model or exposing its training process can leak private information about the training data. In this paper, we analyze and verify that an attacker who obtains the trained parameters of an FF-CNN together with partial output responses can recover private information from the original training data, so protecting the training data is imperative. Due to the particularity of the FF-CNN training method, existing deep learning privacy protection techniques are not applicable. We therefore propose an algorithm, differential privacy subspace approximation with adjusted bias (DPSaab), to protect the training data in FF-CNN. Since the model's filters contribute unequally to the output response, we allocate the privacy budget in proportion to the filters' eigenvalues, assigning a larger budget to filters with larger contributions and vice versa. Extensive experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that DPSaab achieves better utility than existing privacy protection techniques.
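The eigenvalue-proportional budget split described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact DPSaab mechanism: the function names, the unit-sensitivity assumption, and the choice of the Laplace mechanism for perturbing filter parameters are all assumptions made for the example.

```python
import numpy as np

def allocate_budget(eigenvalues, total_epsilon):
    """Split a total privacy budget across filters in proportion to their
    eigenvalues: a filter with a larger eigenvalue (larger contribution to
    the output response) receives a larger share of the budget."""
    ev = np.asarray(eigenvalues, dtype=float)
    return total_epsilon * ev / ev.sum()

def noisy_filters(filters, eigenvalues, total_epsilon, sensitivity=1.0):
    """Perturb each filter with Laplace noise scaled to its per-filter
    budget (scale = sensitivity / epsilon_i), so high-contribution filters
    receive less noise."""
    budgets = allocate_budget(eigenvalues, total_epsilon)
    rng = np.random.default_rng(0)
    return [f + rng.laplace(0.0, sensitivity / eps, size=f.shape)
            for f, eps in zip(filters, budgets)]
```

By sequential composition, the per-filter budgets sum to the total budget, so the overall release still satisfies the intended differential privacy level under this allocation.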
Pages: 631-638
Page count: 8