Enabling Secure NVM-Based In-Memory Neural Network Computing by Sparse Fast Gradient Encryption

Cited by: 16
Authors
Cai, Yi [1 ]
Chen, Xiaoming [2 ]
Tian, Lu [3 ]
Wang, Yu [1 ]
Yang, Huazhong [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Elect Engn, Beijing 100084, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
[3] Xilinx Inc, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Artificial neural networks; Nonvolatile memory; Encryption; Computational modeling; Hardware; Non-volatile memory (NVM); compute-in-memory (CIM); neural network; security; encryption; ATTACKS;
DOI
10.1109/TC.2020.3017870
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology];
Discipline Code
0812;
Abstract
Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and the approaching end of Moore's Law. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CIM) systems that accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory when the system is powered off, enabling an adversary with physical access to extract the well-trained NN models. The goal of this article is to find a solution for thwarting such confidentiality attacks. We define and model the weight-encryption problem. We then propose an effective framework, comprising a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, to guarantee the confidentiality of NN models with negligible performance overhead. Moreover, we improve the SFGE method by generating the encryption keys incrementally. Additionally, we provide variants of the encryption method to better fit quantized models and various mapping strategies. The experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
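The core idea the abstract describes — perturbing only the handful of weights to which accuracy is most sensitive (selected by gradient magnitude), so the model stored in NVM is useless without a tiny secret key — can be sketched as a toy. The following is an illustrative NumPy reconstruction, not the authors' implementation: the function names, the additive sign-step perturbation, and the key format are all assumptions.

```python
import numpy as np

def sfge_encrypt(w, grad, k=3, eps=0.5):
    """Sparse fast-gradient-style encryption (illustrative sketch).

    Perturbs the k weights with the largest gradient magnitude using an
    FGSM-style sign step (the direction that most damages accuracy), and
    returns the small key needed to undo the perturbation exactly.
    """
    flat = w.ravel().copy()
    g = grad.ravel()
    idx = np.argsort(-np.abs(g))[:k]       # most accuracy-sensitive weights
    delta = eps * np.sign(g[idx])          # fast-gradient perturbation
    flat[idx] += delta                     # "encrypt" only these k weights
    key = (idx, delta)                     # tiny key: k indices + deltas
    return flat.reshape(w.shape), key

def sfge_decrypt(w_enc, key):
    """Subtract the stored deltas to restore the original weights exactly."""
    idx, delta = key
    flat = w_enc.ravel().copy()
    flat[idx] -= delta
    return flat.reshape(w_enc.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))                # toy weight matrix
grad = rng.normal(size=(4, 4))             # toy loss gradient w.r.t. w
w_enc, key = sfge_encrypt(w, grad, k=3, eps=0.5)
```

In the paper's setting the perturbed weights would remain in NVM, while only the key (a few indices and deltas per layer) needs secure storage — which is why encrypting roughly 20 weights per layer can suffice.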
Pages: 1596-1610
Page count: 15
Related Papers
50 records
  • [21] Fast and robust analog in-memory deep neural network training
    Rasch, Malte J.
    Carta, Fabio
    Fagbohungbe, Omobayode
    Gokmen, Tayfun
    NATURE COMMUNICATIONS, 2024, 15 (01)
  • [22] In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM
    Huang, Jun-Ying
    Syu, Jing-Lin
    Tsou, Yao-Tung
    Kuo, Sy-Yen
    Chang, Ching-Ray
    ELECTRONICS, 2022, 11 (08)
  • [23] T-EAP: Trainable Energy-Aware Pruning for NVM-based Computing-in-Memory Architecture
    Chang, Cheng-Yang
    Chuang, Yu-Chuan
    Chou, Kuang-Chao
    Wu, An-Yeu
    2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 78 - 81
  • [24] Achieving Lossless Accuracy with Lossy Programming for Efficient Neural-Network Training on NVM-Based Systems
    Wang, Wei-Chen
    Chang, Yuan-Hao
    Kuo, Tei-Wei
    Ho, Chien-Chung
    Chang, Yu-Ming
    Chang, Hung-Sheng
    ACM TRANSACTIONS ON EMBEDDED COMPUTING SYSTEMS, 2019, 18 (05)
  • [25] Approximate Programming Design for Enhancing Energy, Endurance and Performance of Neural Network Training on NVM-based Systems
    Ho, Chien-Chung
    Wang, Wei-Chen
    Hsu, Te-Hao
    Jiang, Zhi-Duan
    Li, Yung-Chun
    10TH IEEE NON-VOLATILE MEMORY SYSTEMS AND APPLICATIONS SYMPOSIUM (NVMSA 2021), 2021,
  • [26] A Compressed Spiking Neural Network Onto a Memcapacitive In-Memory Computing Array
    Oshio, Reon
    Sugahara, Takuya
    Sawada, Atsushi
    Kimura, Mutsumi
    Zhang, Renyuan
    Nakashima, Yasuhiko
    IEEE MICRO, 2024, 44 (01) : 8 - 16
  • [27] A Parallel Randomized Neural Network on In-memory Cluster Computing for Big Data
    Dai, Tongwu
    Li, Kenli
    Chen, Cen
    2017 13TH INTERNATIONAL CONFERENCE ON NATURAL COMPUTATION, FUZZY SYSTEMS AND KNOWLEDGE DISCOVERY (ICNC-FSKD), 2017,
  • [28] Swift: Fast Secure Neural Network Inference With Fully Homomorphic Encryption
    Fu, Yu
    Tong, Yu
    Ning, Yijing
    Xu, Tianshi
    Li, Meng
    Lin, Jingqiang
    Feng, Dengguo
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 2793 - 2806
  • [29] MOL-Based In-Memory Computing of Binary Neural Networks
    Ali, Khaled Alhaj
    Baghdadi, Amer
    Dupraz, Elsa
    Leonardon, Mathieu
    Rizk, Mostafa
    Diguet, Jean-Philippe
    IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2022, 30 (07) : 869 - 880
  • [30] Hadamard product-based in-memory computing design for floating point neural network training
    Fan, Anjunyi
    Fu, Yihan
    Tao, Yaoyu
    Jin, Zhonghua
    Han, Haiyue
    Liu, Huiyu
    Zhang, Yaojun
    Yan, Bonan
    Yang, Yuchao
    Huang, Ru
    NEUROMORPHIC COMPUTING AND ENGINEERING, 2023, 3 (01):