Attacking a Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models

Cited: 0
Authors
Wilhelmstaetter, Simon [1]
Conrad, Joschua [1]
Upadhyaya, Devanshi [2]
Polian, Ilia [2]
Ortmanns, Maurits [1]
Affiliations
[1] Ulm University, Institute of Microelectronics, Albert-Einstein-Allee 43, Ulm, Germany
[2] University of Stuttgart, Institute of Computer Architecture and Computer Engineering, Pfaffenwaldring 47, Stuttgart, Germany
Funding
U.S. National Science Foundation
Keywords
Neural Network; Inference; Accelerator; Logic Locking; Security; Protection Scheme
DOI
10.1109/AICAS59952.2024.10595935
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The tremendous success of artificial neural networks (NNs) in recent years, paired with the proliferation of embedded, low-power devices (e.g. IoT, wearables, and smart sensors), has given rise to specialized NN accelerators that enable NN inference in power-constrained environments. However, manufacturing or operating such accelerators in untrusted environments poses risks of model theft and hardware counterfeiting. One way to protect NN hardware against these threats is to lock both the model and the accelerator with secret keys that can only be supplied by entitled authorities (e.g. the chip designer or distributor). However, current locking mechanisms have severe drawbacks, such as requiring model retraining and being vulnerable to the powerful satisfiability checking (SAT) attack. Recently, an approach for jointly protecting the model and the accelerator was proposed. Compared to previous locking mechanisms, it promises to avoid model retraining, not to leak useful model information, and to resist the SAT attack, thereby securing the NN accelerator against counterfeiting and the model against intellectual property infringement. In this paper, those claims are thoroughly evaluated and severe issues in the technical evidence are identified. Furthermore, an attack is developed that does not require an expanded threat model but is still able to completely circumvent all of the proposed protection schemes. It allows all NN model parameters to be reconstructed (i.e. model theft) and enables hardware counterfeiting.
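To make the key-based locking idea concrete, the sketch below shows a hypothetical XOR-based weight-locking scheme in Python: quantized weights are obfuscated with a keystream derived from a secret key before deployment, so on-chip inference only recovers the true weights when the correct key is supplied. This is a minimal illustration under assumed details, not the joint protection scheme attacked in the paper; the names keystream, lock_weights, and unlock_weights are hypothetical.

    # Hypothetical XOR-based weight locking (illustrative only; not the
    # scheme evaluated in the paper). Each quantized weight is XOR-ed with
    # a key-derived mask, so a wrong key yields garbage weights.
    import numpy as np

    def keystream(key: int, n: int) -> np.ndarray:
        # Derive a deterministic per-weight 8-bit mask from the secret key.
        rng = np.random.default_rng(key)
        return rng.integers(0, 256, size=n, dtype=np.uint8)

    def lock_weights(weights: np.ndarray, key: int) -> np.ndarray:
        # Obfuscate 8-bit quantized weights before shipping the model.
        return weights ^ keystream(key, weights.size).reshape(weights.shape)

    def unlock_weights(locked: np.ndarray, key: int) -> np.ndarray:
        # On-chip de-obfuscation: XOR with the same keystream restores weights.
        return locked ^ keystream(key, locked.size).reshape(locked.shape)

    # Toy example: a single layer of 8-bit quantized weights.
    w = np.array([[12, 250], [7, 99]], dtype=np.uint8)
    locked = lock_weights(w, key=0xC0FFEE)
    assert np.array_equal(unlock_weights(locked, 0xC0FFEE), w)   # correct key
    assert not np.array_equal(unlock_weights(locked, 0xBAD), w)  # wrong key (almost surely)

An attack of the kind described in the abstract would recover the deployed weights without knowledge of the key, e.g. by exploiting how the masks are generated or applied; the paper shows that the evaluated scheme admits exactly such a full parameter reconstruction.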
Pages: 144-148
Page count: 5
Related Papers (50 total)
• [31] Kwon, Jungyoon; Min, Yemi; Egger, Bernhard. SENNA: Unified Hardware/Software Space Exploration for Parametrizable Neural Network Accelerators. ACM Transactions on Embedded Computing Systems, 2025, 24(2).
• [32] Guo, Chao; Yanagisawa, Masao; Shi, Youhua. DSE-Based Hardware Trojan Attack for Neural Network Accelerators on FPGAs. IEEE Transactions on Neural Networks and Learning Systems, 2024.
• [33] Boyer, Alexandre; Abiemona, Rami; Bolic, Miodrag; Petriu, Emil. Vessel Identification using Convolutional Neural Network-based Hardware Accelerators. 2021 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (IEEE CIVEMSA 2021), 2021.
• [34] Wei, Zheng; Zhang, Xingjun; Li, Jingbo; Ji, Zeyu; Wei, Jia. BenQ: Benchmarking Automated Quantization on Deep Neural Network Accelerators. Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE 2022), 2022: 1479-1484.
• [35] Karki, Aajna; Keshava, Chethan Palangotu; Shivakumar, Spoorthi Mysore; Skow, Joshua; Hegde, Goutam Madhukeshwar; Jeon, Hyeran. Tango: A Deep Neural Network Benchmark Suite for Various Accelerators. 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2019: 137-138.
• [36] Russo, Enrico; Palesi, Maurizio; Monteleone, Salvatore; Patti, Davide; Ascia, Giuseppe; Catania, Vincenzo. LAMBDA: An Open Framework for Deep Neural Network Accelerators Simulation. 2021 IEEE International Conference on Pervasive Computing and Communications Workshops and Other Affiliated Events (PerCom Workshops), 2021: 161-166.
• [37] Krichene, Hana; Prasad, Rohit; Mouhagir, Ayoub. AINoC: New Interconnect for Future Deep Neural Network Accelerators. Design and Architecture for Signal and Image Processing (DASIP 2023), 2023, 13879: 55-69.
• [38] Ozdemir, Sarp; Khasawneh, Mohammad; Rao, Smriti; Madden, Patrick H. Kernel Mapping Techniques for Deep Learning Neural Network Accelerators. ISPD'22: Proceedings of the 2022 International Symposium on Physical Design, 2022: 21-28.
• [39] Li, Xiaowei; Li, Jiajun; Yan, Guihai. Optimizing Memory Efficiency for Deep Convolutional Neural Network Accelerators. Journal of Low Power Electronics, 2018, 14(4): 496-507.
• [40] Manoj, B. R.; Yaji, Jayashree S.; Raghuram, S. A New Constant Coefficient Multiplier for Deep Neural Network Accelerators. 2022 IEEE 3rd International Conference on VLSI Systems, Architecture, Technology and Applications (VLSI SATA), 2022.