Enabling Secure NVM-Based In-Memory Neural Network Computing by Sparse Fast Gradient Encryption

Cited by: 16
Authors
Cai, Yi [1 ]
Chen, Xiaoming [2 ]
Tian, Lu [3 ]
Wang, Yu [1 ]
Yang, Huazhong [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Elect Engn, Beijing 100084, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100864, Peoples R China
[3] Xilinx Inc, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Artificial neural networks; non-volatile memory (NVM); encryption; computational modeling; hardware; compute-in-memory (CIM); neural network; security; attacks;
DOI
10.1109/TC.2020.3017870
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
Neural network (NN) computing is energy-consuming on traditional computing systems, owing to the inherent memory-wall bottleneck of the von Neumann architecture and the approaching end of Moore's Law. Non-volatile memories (NVMs) have been demonstrated as promising alternatives for constructing computing-in-memory (CIM) systems to accelerate NN computing. However, NVM-based NN computing systems are vulnerable to confidentiality attacks because the weight parameters persist in memory even when the system is powered off, enabling an adversary with physical access to extract the well-trained NN models. The goal of this article is to find a solution for thwarting such confidentiality attacks. We define and model the weight encryption problem. Then we propose an effective framework, comprising a sparse fast gradient encryption (SFGE) method and a runtime encryption scheduling (RES) scheme, to guarantee the confidentiality of NN models with negligible performance overhead. Moreover, we improve the SFGE method by incrementally generating the encryption keys. Additionally, we provide variants of the encryption method to better fit quantized models and various mapping strategies. The experiments demonstrate that by encrypting only an extremely small proportion of the weights (e.g., 20 weights per layer in ResNet-101), the NN models can be strictly protected.
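The core mechanism the abstract describes, perturbing a small number of the most loss-sensitive weights in the fast-gradient (sign-of-gradient) direction so that the model stored in NVM is useless without the key, can be sketched compactly. Below is a minimal, hypothetical PyTorch sketch; the function names and the k and eps parameters are illustrative assumptions, not the authors' implementation.

import torch

def sfge_encrypt(model, loss_fn, data, target, k=20, eps=0.5):
    # Perturb the k most loss-sensitive weights per layer in the
    # fast-gradient (sign-of-gradient) direction. Returns the key:
    # {param_name: (flat_indices, perturbations)} needed for decryption.
    model.zero_grad()
    loss_fn(model(data), target).backward()
    key = {}
    for name, p in model.named_parameters():
        if p.grad is None or p.dim() < 2:      # skip biases, norm params
            continue
        g = p.grad.flatten()
        idx = g.abs().topk(min(k, g.numel())).indices
        delta = eps * g[idx].sign()            # FGSM-style perturbation
        p.data.view(-1)[idx] += delta          # encrypt in place
        key[name] = (idx, delta)
    return key

def sfge_decrypt(model, key):
    # Subtract the stored perturbations to restore the plaintext weights.
    for name, p in model.named_parameters():
        if name in key:
            idx, delta = key[name]
            p.data.view(-1)[idx] -= delta

In this sketch, the weights left in NVM at power-off are the encrypted ones: an attacker who reads them out obtains a model whose accuracy has collapsed, while the legitimate user holds only a tiny key (indices plus perturbations, e.g., 20 per layer) and restores the weights at runtime, which is what makes a runtime encryption scheduling scheme practical.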
Pages: 1596-1610
Number of pages: 15
Related Papers
50 records in total
  • [41] RRAM-Based In-Memory Computing for Embedded Deep Neural Networks. Bankman, D.; Messner, J.; Gural, A.; Murmann, B. Conference Record of the 2019 Fifty-Third Asilomar Conference on Signals, Systems & Computers, 2019: 1511-1515.
  • [42] In-Memory Computing Based Hardware Accelerator Module for Deep Neural Networks. Appukuttan, Allen; Thomas, Emmanuel; Nair, Harinandan R.; Hemanth, S.; Dhanaraj, K. J.; Azeez, Maleeha Abdul. 2022 IEEE 19th India Council International Conference (INDICON), 2022.
  • [43] Ultralow-power in-memory computing based on ferroelectric memcapacitor network. Tian, Bobo; Xie, Zhuozhuang; Chen, Luqiu; Hao, Shenglan; Liu, Yifei; Feng, Guangdi; Liu, Xuefeng; Liu, Hongbo; Yang, Jing; Zhang, Yuanyuan; Bai, Wei; Lin, Tie; Shen, Hong; Meng, Xiangjian; Zhong, Ni; Peng, Hui; Yue, Fangyu; Tang, Xiaodong; Wang, Jianlu; Zhu, Qiuxiang; Ivry, Yachin; Dkhil, Brahim; Chu, Junhao; Duan, Chungang. Exploration, 2023, 3(3).
  • [44] Efficient Discrete Temporal Coding Spike-Driven In-Memory Computing Macro for Deep Neural Network Based on Nonvolatile Memory. Han, Lixia; Huang, Peng; Wang, Yijiao; Zhou, Zheng; Zhang, Yizhou; Liu, Xiaoyan; Kang, Jinfeng. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022, 69(11): 4487-4498.
  • [45] XNOR-BSNN: In-Memory Computing Model for Deep Binarized Spiking Neural Network. Nguyen, Van-Tinh; Quang-Kien Trinh; Zhang, Renyuan; Nakashima, Yasuhiko. 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS), 2021: 17-21.
  • [46] Device Variation Effects on Neural Network Inference Accuracy in Analog In-Memory Computing Systems. Wang, Qiwen; Park, Yongmo; Lu, Wei D. Advanced Intelligent Systems, 2022, 4(8).
  • [47] Time-Multiplexed Flash ADC for Deep Neural Network Analog In-Memory Computing. Boni, Andrea; Frattini, Francesco; Caselli, Michele. 2021 28th IEEE International Conference on Electronics, Circuits, and Systems (ICECS), 2021.
  • [48] Ternary Output Binary Neural Network With Zero-Skipping for MRAM-Based Digital In-Memory Computing. Na, Taehui. IEEE Transactions on Circuits and Systems II: Express Briefs, 2023, 70(7): 2655-2659.
  • [49] SMTJ-based Dropout Module for In-Memory Computing Bayesian Neural Networks. Danouchi, Kamal; Prenat, Guillaume; Anghel, Lorena. 2024 IEEE 24th International Conference on Nanotechnology (NANO), 2024: 501-506.
  • [50] Polyhedral-Based Compilation Framework for In-Memory Neural Network Accelerators. Han, Jianhui; Fei, Xiang; Li, Zhaolin; Zhang, Youhui. ACM Journal on Emerging Technologies in Computing Systems, 2022, 18(1).