A privacy preservation framework for feedforward-designed convolutional neural networks

Cited by: 7
Authors
Li, De [1 ,2 ]
Wang, Jinyan [1 ,2 ]
Li, Qiyu [2 ]
Hu, Yuhang [2 ]
Li, Xianxian [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Guangxi Key Lab Multisource Informat Min & Secur, Guilin, Peoples R China
[2] Guangxi Normal Univ, Sch Comp Sci & Engn, Guilin, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Differential privacy; Convolutional neural networks; Feedforward-designed; Feature selection; Over-fitting; MODEL;
DOI
10.1016/j.neunet.2022.08.005
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
A feedforward-designed convolutional neural network (FF-CNN) is an interpretable neural network with low training complexity. Unlike a neural network trained with the backpropagation (BP) algorithm and an optimizer (e.g., stochastic gradient descent (SGD) or Adam), an FF-CNN obtains its model parameters in a single feedforward pass based on two statistical methods: subspace approximation with adjusted bias (Saab) and least squares regression. Models based on the FF-CNN training methodology have achieved outstanding performance in image classification and point cloud data processing. In this study, we analyze and verify that the training process of an FF-CNN risks leaking user privacy, and that existing privacy-preserving methods targeting model gradients or loss functions do not apply to FF-CNN models. We therefore propose a secure feedforward-designed convolutional neural network (SFF-CNN) algorithm to protect the privacy and security of data providers for the FF-CNN model. First, we propose the DPSaab algorithm, which adds calibrated noise to the one-stage Saab transform in the FF-CNN design for improved protection performance. Second, because noise addition brings a risk of model over-fitting and thereby further increases the possibility of privacy leakage, we propose the SJS algorithm to filter the input features of the model's fully connected layers. Finally, we theoretically prove that the proposed algorithm satisfies differential privacy and experimentally demonstrate that it provides strong privacy protection. The proposed algorithm outperforms the compared deep-learning privacy-preserving algorithms in terms of utility and robustness. (C) 2022 Published by Elsevier Ltd.
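The abstract describes perturbing a one-stage Saab transform with calibrated noise to satisfy differential privacy. The sketch below is a hypothetical illustration of that general idea, not the authors' DPSaab implementation: the PCA-based approximation of the Saab transform, the choice of the Laplace mechanism, and the `sensitivity` value are all assumptions made for the example.

```python
import numpy as np

def saab_transform(patches, num_kernels):
    """Simplified one-stage Saab-style transform (illustrative only):
    remove the per-patch DC component, then project the AC residual
    onto the top principal directions of the patch covariance."""
    mean = patches.mean(axis=1, keepdims=True)   # per-patch DC component
    ac = patches - mean                          # AC residual
    cov = np.cov(ac, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)             # eigenvalues ascending
    kernels = eigvecs[:, ::-1][:, :num_kernels]  # top principal directions
    return ac @ kernels, kernels

def dp_saab(patches, num_kernels, epsilon, sensitivity=1.0):
    """Laplace-mechanism sketch: perturb the Saab coefficients with noise
    of scale sensitivity / epsilon (sensitivity=1.0 is an assumption)."""
    coeffs, kernels = saab_transform(patches, num_kernels)
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    return coeffs + noise, kernels

rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 9))              # e.g. 200 flattened 3x3 patches
noisy_coeffs, kernels = dp_saab(patches, num_kernels=4, epsilon=1.0)
print(noisy_coeffs.shape)                        # (200, 4)
```

A smaller epsilon yields larger noise and stronger privacy at the cost of utility, which is why the paper pairs noise addition with feature selection (the SJS step) to curb the resulting over-fitting.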
Pages: 14-27
Number of pages: 14
Related papers
50 records
  • [1] Differential Privacy Preservation in Interpretable Feedforward-Designed Convolutional Neural Networks
    Li, De
    Wang, Jinyan
    Tan, Zhou
    Li, Xianxian
    Hu, Yuhang
    [J]. 2020 IEEE 19TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2020), 2020, : 631 - 638
  • [2] ENSEMBLES OF FEEDFORWARD-DESIGNED CONVOLUTIONAL NEURAL NETWORKS
    Chen, Yueru
    Yang, Yijing
    Wang, Wei
    Kuo, C. -C. Jay
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 3796 - 3800
  • [3] SEMI-SUPERVISED LEARNING VIA FEEDFORWARD-DESIGNED CONVOLUTIONAL NEURAL NETWORKS
    Chen, Yueru
    Yang, Yijing
    Zhang, Min
    Kuo, C. -C. Jay
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 365 - 369
  • [4] Interpretable convolutional neural networks via feedforward design
    Kuo, C-C. Jay
    Zhang, Min
    Li, Siyang
    Duan, Jiali
    Chen, Yueru
    [J]. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60 : 346 - 359
  • [5] Social Networks Privacy Preservation: A Novel Framework
    Singh, Amardeep
    Singh, Monika
    [J]. CYBERNETICS AND SYSTEMS, 2022
  • [6] A fusing framework of shortcut convolutional neural networks
    Zhang, Ting
    Waqas, Muhammad
    Liu, Zhaoying
    Tu, Shanshan
    Halim, Zahid
    Rehman, Sadaqat Ur
    Li, Yujian
    Han, Zhu
    [J]. INFORMATION SCIENCES, 2021, 579 : 685 - 699
  • [7] Fast Computing Framework for Convolutional Neural Networks
    Korytkowski, Marcin
    Staszewski, Pawel
    Woldan, Piotr
    Scherer, Rafal
    [J]. PROCEEDINGS OF 2016 IEEE INTERNATIONAL CONFERENCES ON BIG DATA AND CLOUD COMPUTING (BDCLOUD 2016) SOCIAL COMPUTING AND NETWORKING (SOCIALCOM 2016) SUSTAINABLE COMPUTING AND COMMUNICATIONS (SUSTAINCOM 2016) (BDCLOUD-SOCIALCOM-SUSTAINCOM 2016), 2016, : 118 - 123
  • [8] Fault tolerance of feedforward artificial neural networks - A framework of study
    Chandra, P
    Singh, Y
    [J]. PROCEEDINGS OF THE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS 2003, VOLS 1-4, 2003, : 489 - 494
  • [9] DeepGlobal: A framework for global robustness verification of feedforward neural networks
    Sun, Weidi
    Lu, Yuteng
    Zhang, Xiyue
    Sun, Meng
    [J]. JOURNAL OF SYSTEMS ARCHITECTURE, 2022, 128
  • [10] Privacy Partition: A Privacy-preserving Framework for Deep Neural Networks in Edge Networks
    Chi, Jianfeng
    Owusu, Emmanuel
    Yin, Xuwang
    Yu, Tong
    Chan, William
    Liu, Yiming
    Liu, Haodong
    Chen, Jiasen
    Sim, Swee
    Iyengar, Vibha
    Tague, Patrick
    Tian, Yuan
    [J]. 2018 THIRD IEEE/ACM SYMPOSIUM ON EDGE COMPUTING (SEC), 2018, : 378 - 380