Attacking a Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models

Cited: 0
Authors
Wilhelmstaetter, Simon [1 ]
Conrad, Joschua [1 ]
Upadhyaya, Devanshi [2 ]
Polian, Ilia [2 ]
Ortmanns, Maurits [1 ]
Affiliations
[1] Univ Ulm, Inst Microelect, Albert Einstein Allee 43, Ulm, Germany
[2] Univ Stuttgart, Inst Comp Architecture & Comp Engn, Pfaffenwaldring 47, Stuttgart, Germany
Funding
US National Science Foundation;
Keywords
Neural Network; Inference; Accelerator; Logic Locking; Security; Protection Scheme;
DOI
10.1109/AICAS59952.2024.10595935
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The tremendous success of artificial neural networks (NNs) in recent years, paired with the proliferation of embedded, low-power devices (e.g. IoT, wearables, and smart sensors), gave rise to specialized NN accelerators that enable NN inference in power-constrained environments. However, manufacturing or operating such accelerators in untrusted environments poses risks of model theft and hardware counterfeiting. One way to protect NN hardware against those threats is to lock both the model and the accelerator with secret keys that can only be supplied by entitled authorities (e.g. the chip designer or distributor). However, current locking mechanisms have severe drawbacks, such as requiring model retraining and being vulnerable to the powerful satisfiability-checking (SAT) attack. Recently, an approach for jointly protecting the model and the accelerator was proposed. Compared to previous locking mechanisms, it promises to avoid model retraining, not leak useful model information, and resist the SAT attack, thereby securing the NN accelerator against counterfeiting and the model against intellectual-property infringement. In this paper, those claims are thoroughly evaluated and severe issues in the technical evidence are identified. Furthermore, an attack is developed that does not require an expanded threat model but is still able to completely circumvent all of the proposed protection schemes. It allows all NN model parameters to be reconstructed (i.e. model theft) and enables hardware counterfeiting.
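To make the threat setting concrete, the following is a minimal toy sketch (not the scheme attacked in the paper, and not its actual attack): quantized weights of a linear layer are XOR-masked with a secret key stream so that inference only works when the correct key is supplied, and an adversary with oracle access to a correctly keyed device can still extract every weight by probing with one-hot inputs. All names and the masking construction here are illustrative assumptions.

```python
import numpy as np

# Toy weight-locking illustration (assumed construction, not the paper's scheme):
# int8 weights of a linear layer are XOR-masked with a key-derived mask.
rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(4, 4), dtype=np.int8)       # true int8 weights
key = rng.integers(0, 256, size=W.shape, dtype=np.uint8)  # secret key stream
W_locked = (W.view(np.uint8) ^ key).view(np.int8)         # stored on the device

def infer(x, k):
    """Accelerator unmasks the stored weights with key k, then computes W @ x."""
    W_unlocked = (W_locked.view(np.uint8) ^ k).view(np.int8)
    return W_unlocked.astype(np.int32) @ x

# With the correct key, inference matches the original model.
x = rng.integers(-4, 4, size=4, dtype=np.int8).astype(np.int32)
assert np.array_equal(infer(x, key), W.astype(np.int32) @ x)

# Model theft via oracle access: probing a correctly keyed device with
# one-hot inputs returns the columns of W, reconstructing all parameters.
E = np.eye(4, dtype=np.int32)
W_stolen = np.column_stack([infer(E[i], key) for i in range(4)])
assert np.array_equal(W_stolen, W.astype(np.int32))
```

The point of the sketch is that locking the stored weights alone does not protect the model once a legitimately unlocked device can be queried; the paper's attack similarly recovers all model parameters without expanding the threat model.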
Pages: 144-148
Page count: 5
Related Papers
50 records in total
  • [1] Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models
    Zhou, Jingbo
    Zhang, Xinmiao
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (12) : 4518 - 4527
  • [2] An overview memristor based hardware accelerators for deep neural network
    Gokgoz, Baki
    Gul, Fatih
    Aydin, Tolga
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (09):
  • [3] Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey
    Armeniakos, Giorgos
    Zervakis, Georgios
    Soudris, Dimitrios
    Henkel, Joerg
    ACM COMPUTING SURVEYS, 2023, 55 (04)
  • [4] SEALing Neural Network Models in Encrypted Deep Learning Accelerators
    Zuo, Pengfei
    Hua, Yu
    Liang, Ling
    Xie, Xinfeng
    Hu, Xing
    Xie, Yuan
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 1255 - 1260
  • [5] Surrogate Model based Co-Optimization of Deep Neural Network Hardware Accelerators
    Woehrle, Hendrik
    Alvarez, Mariela De Lucas
    Schlenke, Fabian
    Walsemann, Alexander
    Karagounis, Michael
    Kirchner, Frank
    2021 IEEE INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2021, : 40 - 45
  • [6] Special Session: Effective In-field Testing of Deep Neural Network Hardware Accelerators
    Kundu, Shamik
    Banerjee, Suvadeep
    Raha, Arnab
    Basu, Kanad
    2022 IEEE 40TH VLSI TEST SYMPOSIUM (VTS), 2022,
  • [7] Exploring Quantization and Mapping Synergy in Hardware-Aware Deep Neural Network Accelerators
    Klhufek, Jan
    Safar, Miroslav
    Mrazek, Vojtech
    Vasicek, Zdenek
    Sekanina, Lukas
    2024 27TH INTERNATIONAL SYMPOSIUM ON DESIGN & DIAGNOSTICS OF ELECTRONIC CIRCUITS & SYSTEMS, DDECS, 2024, : 1 - 6
  • [8] A training method for deep neural network inference accelerators with high tolerance for their hardware imperfection
    Gao, Shuchao
    Ohsawa, Takashi
    JAPANESE JOURNAL OF APPLIED PHYSICS, 2024, 63 (02)
  • [9] Efficient Hardware Approximation for Bit-Decomposition Based Deep Neural Network Accelerators
    Soliman, Taha
    Eldebiky, Amro
    De La Parra, Cecilia
    Guntoro, Andre
    Wehn, Norbert
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 77 - 82
  • [10] Memory Requirements for Convolutional Neural Network Hardware Accelerators
    Siu, Kevin
    Stuart, Dylan Malone
    Mahmoud, Mostafa
    Moshovos, Andreas
    2018 IEEE INTERNATIONAL SYMPOSIUM ON WORKLOAD CHARACTERIZATION (IISWC), 2018, : 111 - 121