A Multi-Layer Parallel Hardware Architecture for Homomorphic Computation in Machine Learning

Cited by: 9
Authors
Xin, Guozhu [1 ]
Zhao, Yifan [1 ]
Han, Jun [1 ]
Affiliations
[1] Fudan Univ, State Key Lab ASIC & Syst, Shanghai 201203, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Homomorphic encryption; machine learning; parallelism; hardware acceleration; FPGA; processor;
DOI
10.1109/ISCAS51556.2021.9401623
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
Homomorphic Encryption (HE) allows untrusted parties to process encrypted data without learning its content. Users can encrypt data locally and send it to the cloud for neural network training or inference, preserving data privacy in AI applications. However, combining AI workloads with HE can be extremely slow. To address this, we propose a multi-level parallel hardware accelerator for homomorphic computation in machine learning. A vectorized Number Theoretic Transform (NTT) unit provides the low-level parallelism, and a Residue Number System (RNS) provides the mid-level parallelism within a single polynomial. Finally, a fully pipelined, parallel accelerator operating on two ciphertext operands provides the high-level parallelism. To address the core computation in neural networks, matrix-vector multiplication, the design natively supports Multiply-Accumulate (MAC) operations between ciphertexts. We have evaluated our design on a Xilinx ZCU102 FPGA, and experimental results show that it outperforms previous works and achieves more than an order of magnitude speedup over software implementations.
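To make the layering concrete, the following minimal Python sketch (our illustration, not the paper's FPGA design) mimics the two lower levels: an NTT-based polynomial multiplication run independently in each RNS channel, followed by CRT recombination. The toy parameters (n = 8, primes 17 and 97) and the use of a cyclic convolution mod X^n - 1, rather than the negacyclic ring typical of HE schemes, are simplifying assumptions.

```python
# Minimal sketch (assumed parameters, not the paper's RTL) of the two lower
# parallelism layers: an NTT-based polynomial multiply (low level) executed
# independently in each RNS channel (mid level), then CRT recombination.

def find_root(n, p):
    """Find a primitive n-th root of unity mod prime p (n a power of two, n | p-1)."""
    for g in range(2, p):
        r = pow(g, (p - 1) // n, p)
        if pow(r, n // 2, p) != 1:   # order is exactly n, not a proper divisor
            return r
    raise ValueError("no primitive root found")

def ntt(a, root, p):
    """Recursive Cooley-Tukey NTT over Z_p; the per-stage butterflies are
    independent, which is what a vectorized hardware NTT unit exploits."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], root * root % p, p)
    odd = ntt(a[1::2], root * root % p, p)
    out = [0] * n
    w = 1
    for k in range(n // 2):
        t = w * odd[k] % p
        out[k] = (even[k] + t) % p
        out[k + n // 2] = (even[k] - t) % p
        w = w * root % p
    return out

def poly_mul_ntt(a, b, p):
    """Cyclic convolution mod (X^n - 1, p) via NTT. In the NTT (evaluation)
    domain, multiplication, and hence a ciphertext MAC, is a purely
    pointwise modular multiply-accumulate across lanes."""
    n = len(a)
    root = find_root(n, p)
    fa, fb = ntt(a, root, p), ntt(b, root, p)
    fc = [x * y % p for x, y in zip(fa, fb)]
    c = ntt(fc, pow(root, p - 2, p), p)      # inverse NTT uses root^{-1}
    inv_n = pow(n, p - 2, p)
    return [x * inv_n % p for x in c]

# Mid-level (RNS) layer: arithmetic mod Q = P1*P2 splits into two channels
# that a hardware design can run fully in parallel.
n, P1, P2 = 8, 17, 97                        # toy primes with n | p - 1
Q = P1 * P2
a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]

c1 = poly_mul_ntt([x % P1 for x in a], [x % P1 for x in b], P1)
c2 = poly_mul_ntt([x % P2 for x in a], [x % P2 for x in b], P2)
inv_p1 = pow(P1, P2 - 2, P2)                 # P1^{-1} mod P2 for CRT (Garner)
c = [(r1 + P1 * ((r2 - r1) * inv_p1 % P2)) % Q for r1, r2 in zip(c1, c2)]

# Sanity check against a schoolbook cyclic convolution mod Q.
ref = [sum(a[j] * b[(i - j) % n] for j in range(n)) % Q for i in range(n)]
assert c == ref
print("RNS+NTT product:", c)
```

Because the NTT domain turns polynomial products into pointwise modular operations, a ciphertext MAC reduces to independent multiply-accumulate lanes per RNS channel, which is the property the accelerator stacks into its three levels of parallelism.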
Pages: 5