Towards Deep Neural Network Training on Encrypted Data

Cited by: 67
Authors
Nandakumar, Karthik [1 ]
Ratha, Nalini [2 ]
Pankanti, Sharath [2 ]
Halevi, Shai [3 ]
Affiliations
[1] IBM Res, Singapore, Singapore
[2] IBM Res, Yorktown Hts, NY 10598 USA
[3] Algorand Fdn, Boston, MA USA
Keywords
FULLY HOMOMORPHIC ENCRYPTION;
DOI
10.1109/CVPRW.2019.00011
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While deep learning is a valuable tool for solving many tough problems in computer vision, the success of deep learning models is typically determined by: (i) availability of sufficient training data, (ii) access to extensive computational resources, and (iii) expertise in selecting the right model and hyperparameters for the selected task. Often, the availability of data is the hard part due to compliance, legal, and privacy constraints. Cryptographic techniques such as fully homomorphic encryption (FHE) offer a potential solution by enabling processing on encrypted data. While prior work has been done on using FHE for inferencing, training a deep neural network in the encrypted domain is an extremely challenging task due to the computational complexity of the operations involved. In this paper, we evaluate the feasibility of training neural networks on encrypted data in a completely non-interactive way. Our proposed system uses the open-source FHE toolkit HElib to implement a Stochastic Gradient Descent (SGD)-based training of a neural network. We show that encrypted training can be made more computationally efficient by (i) simplifying the network with minimal degradation of accuracy, (ii) choosing an appropriate data representation and resolution, and (iii) packing the data elements within the ciphertext in a smart way so as to minimize the number of operations and facilitate parallelization of FHE computations. Based on the above optimizations, we demonstrate that it is possible to achieve more than 50x speed-up while training a fully-connected neural network on the MNIST dataset while achieving reasonable accuracy (96%). Though the cost of training a complex deep learning model from scratch on encrypted data is still very high, this work establishes a solid baseline and paves the way for relatively simpler tasks, such as fine-tuning of deep learning models on encrypted data, to be implemented in the near future.
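The two key optimizations named in the abstract, choosing a low-resolution data representation and packing data elements into ciphertext slots, can be illustrated with a plaintext simulation. The sketch below is hypothetical and not the paper's actual HElib code: it shows fixed-point encoding into integers modulo an assumed plaintext modulus, and SIMD-style packing of a mini-batch so that one homomorphic operation would act on all packed elements at once. All constants and names are illustrative assumptions.

```python
# Hypothetical sketch of fixed-point encoding and slot packing for FHE-style
# SGD training, simulated in plaintext. Not the authors' implementation.

PLAINTEXT_MODULUS = 65537   # assumed prime plaintext modulus (illustrative)
SCALE = 2 ** 8              # 8 bits of fractional resolution

def encode(x: float) -> int:
    """Map a real value to a fixed-point integer in Z_p."""
    return round(x * SCALE) % PLAINTEXT_MODULUS

def decode(v: int) -> float:
    """Invert encode(), treating values above p/2 as negative."""
    if v > PLAINTEXT_MODULUS // 2:
        v -= PLAINTEXT_MODULUS
    return v / SCALE

def pack(batch):
    """Pack a mini-batch of encoded values into one 'ciphertext' slot vector,
    so a single homomorphic op would process all of them in parallel."""
    return [encode(x) for x in batch]

# One SIMD-style slot-wise operation: elementwise add of two packed batches.
a = pack([0.5, -1.25, 2.0])
b = pack([0.25, 0.25, -0.5])
summed = [(u + v) % PLAINTEXT_MODULUS for u, v in zip(a, b)]
print([decode(s) for s in summed])  # [0.75, -1.0, 1.5]
```

The design trade-off the abstract alludes to is visible here: a smaller SCALE lowers the precision of each value but keeps intermediate products within the plaintext modulus, which is what makes low-resolution training tractable under FHE.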
Pages: 40 - 48
Page count: 9
Related Papers
50 records total
  • [11] Applying neural network approach to homomorphic encrypted data
    Ghimes, Ana-Maria
    Vladuta, Valentin-Alexandru
    Patriciu, Victor-Valeriu
    Ionita, Adrian
    [J]. PROCEEDINGS OF THE 2018 10TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE (ECAI), 2018,
  • [12] Transmitting encrypted data by wavelet transform and neural network
    Ashtiyani, Meghdad
    Behbahani, Soroor
    Asadi, Saeed
    Birgani, Parmida Moradi
    [J]. 2007 IEEE INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND INFORMATION TECHNOLOGY, VOLS 1-3, 2007, : 251 - 255
  • [13] slytHErin: An Agile Framework for Encrypted Deep Neural Network Inference
    Intoci, Francesco
    Sav, Sinem
    Pyrgelis, Apostolos
    Bossuat, Jean-Philippe
    Troncoso-Pastoriza, Juan Ramon
    Hubaux, Jean-Pierre
    [J]. APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2023 SATELLITE WORKSHOPS, ADSC 2023, AIBLOCK 2023, AIHWS 2023, AIOTS 2023, CIMSS 2023, CLOUD S&P 2023, SCI 2023, SECMT 2023, SIMLA 2023, 2023, 13907 : 359 - 377
  • [14] SEALing Neural Network Models in Encrypted Deep Learning Accelerators
    Zuo, Pengfei
    Hua, Yu
    Liang, Ling
    Xie, Xinfeng
    Hu, Xing
    Xie, Yuan
    [J]. 2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 1255 - 1260
  • [15] Generative deep deconvolutional neural network for increasing and diversifying training data
    Qin, Runnan
    Wang, Rui
    [J]. 2018 IEEE INTERNATIONAL CONFERENCE ON IMAGING SYSTEMS AND TECHNIQUES (IST), 2018, : 102 - 107
  • [16] Augmented reality data generation for training deep learning neural network
    Payumo, Kevin
    Huyen, Alexander
    Seouin, Landan
    Lu, Thomas
    Chow, Edward
    Torres, Gil
    [J]. PATTERN RECOGNITION AND TRACKING XXIX, 2018, 10649
  • [17] Deep Neural Network Image Fusion Without Using Training Data
    Zhu, L.
    Baturin, P.
    [J]. MEDICAL PHYSICS, 2019, 46 (06) : E382 - E382
  • [18] Benchmarking network fabrics for data distributed training of deep neural networks
    Samsi, Siddharth
    Prout, Andrew
    Jones, Michael
    Kirby, Andrew
    Arcand, Bill
    Bergeron, Bill
    Bestor, David
    Byun, Chansup
    Gadepally, Vijay
    Houle, Michael
    Hubbell, Matthew
    Klein, Anna
    Michaleas, Peter
    Milechin, Lauren
    Mullen, Julie
    Rosa, Antonio
    Yee, Charles
    Reuther, Albert
    Kepner, Jeremy
    [J]. 2020 IEEE HIGH PERFORMANCE EXTREME COMPUTING CONFERENCE (HPEC), 2020,
  • [19] Towards the robustness in neural network training
    Manic, M
    Wilamowski, B
    [J]. IECON-2002: PROCEEDINGS OF THE 2002 28TH ANNUAL CONFERENCE OF THE IEEE INDUSTRIAL ELECTRONICS SOCIETY, VOLS 1-4, 2002, : 1768 - 1771
  • [20] Visualization in Deep Neural Network Training
    Kollias, Stefanos
    [J]. INTERNATIONAL JOURNAL ON ARTIFICIAL INTELLIGENCE TOOLS, 2022, 31 (03)