Differential Privacy Preservation in Interpretable Feedforward-Designed Convolutional Neural Networks

Cited: 9
Authors
Li, De
Wang, Jinyan [1]
Tan, Zhou
Li, Xianxian
Hu, Yuhang
Affiliation
[1] Guangxi Normal University, Guangxi Key Lab of Multisource Information Mining & Security, Guilin, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Differential privacy; Interpretable CNN; Privacy risks; Deep learning;
DOI
10.1109/TrustCom50675.2020.00089
CLC number
TP3 [Computing Technology, Computer Technology]
Subject classification code
0812
Abstract
The feedforward-designed convolutional neural network (FF-CNN) is an interpretable network whose parameters are trained without backpropagation (BP) or iterative optimization algorithms such as stochastic gradient descent (SGD). Instead, the parameters of each layer are derived in a one-pass manner from the statistics of the previous layer's output. Because the training complexity of the FF design is lower than that of BP, FF-CNN offers better utility than BP-trained models in semi-supervised learning, ensemble learning, and continuous subspace learning. However, releasing an FF-CNN model or its training process can leak the privacy of the training data. In this paper, we analyze and verify that an attacker who obtains the trained parameters of an FF-CNN together with partial output responses can recover private information about the original training data, so protecting the training data is imperative. Due to the particularity of the FF-CNN training method, existing privacy-protection techniques for deep learning do not apply. We therefore propose an algorithm called differential privacy subspace approximation with adjusted bias (DPSaab) to protect the training data of FF-CNN. Based on each filter's contribution to the output response, we allocate the privacy budget in proportion to the corresponding eigenvalues: filters with larger contributions receive a larger share of the budget, and vice versa. Extensive experiments on the MNIST, Fashion-MNIST, and CIFAR-10 datasets show that DPSaab achieves better utility than existing privacy-protection techniques.
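The abstract only outlines the eigenvalue-proportional budget split; the full DPSaab mechanism (including the adjusted-bias step) is in the paper. As a rough illustration of the allocation idea alone, here is a minimal Python sketch. The function name `dpsaab_perturb`, the use of the Laplace mechanism, and the unit sensitivity are assumptions for illustration, not the authors' exact construction:

```python
import numpy as np

def dpsaab_perturb(filters, eigenvalues, total_epsilon, sensitivity=1.0):
    """Illustrative sketch: eigenvalue-proportional privacy-budget allocation.

    filters:       (k, d) array of Saab filter vectors, one per row.
    eigenvalues:   (k,) PCA eigenvalues associated with the filters.
    total_epsilon: overall differential-privacy budget for this layer.
    sensitivity:   assumed L1 sensitivity of each filter (placeholder).
    """
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    # Split the budget in proportion to each filter's eigenvalue (its
    # "contribution"): high-energy filters get more epsilon, hence less noise.
    epsilons = total_epsilon * eigenvalues / eigenvalues.sum()
    noisy = np.empty_like(filters, dtype=float)
    for i, (f, eps) in enumerate(zip(filters, epsilons)):
        # Laplace mechanism with per-filter scale = sensitivity / epsilon_i.
        noisy[i] = f + np.random.laplace(0.0, sensitivity / eps, size=f.shape)
    return noisy

# Example: 4 filters over 5x5 patches (dimension 25), eigenvalues from PCA.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 25))
lam = np.array([3.2, 1.1, 0.5, 0.2])
F_noisy = dpsaab_perturb(F, lam, total_epsilon=1.0)
```

By sequential composition over the k filters, the per-filter budgets sum to the layer's total epsilon, which is why a larger share for a high-eigenvalue filter directly translates into lower noise where it matters most.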
Pages: 631-638
Page count: 8