Attacking a Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models

Citations: 0
Authors
Wilhelmstaetter, Simon [1 ]
Conrad, Joschua [1 ]
Upadhyaya, Devanshi [2 ]
Polian, Ilia [2 ]
Ortmanns, Maurits [1 ]
Affiliations
[1] Univ Ulm, Inst Microelect, Albert Einstein Allee 43, Ulm, Germany
[2] Univ Stuttgart, Inst Comp Architecture & Comp Engn, Pfaffenwaldring 47, Stuttgart, Germany
Funding
U.S. National Science Foundation;
Keywords
Neural Network; Inference; Accelerator; Logic Locking; Security; Protection Scheme;
DOI
10.1109/AICAS59952.2024.10595935
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The tremendous success of artificial neural networks (NNs) in recent years, paired with the leap of embedded, low-power devices (e.g. IoT, wearables, and smart sensors), gave rise to specialized NN accelerators that enable the inference of NNs in power-constrained environments. However, manufacturing or operating such accelerators in untrusted environments poses risks of undesired model theft and hardware counterfeiting. One way to protect NN hardware against those threats is to lock both the model and the accelerator with secret keys that can only be supplied by entitled authorities (e.g. the chip designer or distributor). However, current locking mechanisms have severe drawbacks, such as required model retraining and vulnerability to the powerful satisfiability-checking (SAT) attack. Recently, an approach for jointly protecting the model and the accelerator was proposed. Compared to previous locking mechanisms, it promises to avoid model retraining, not leak useful model information, and resist the SAT attack, thereby securing the NN accelerator against counterfeiting and the model against intellectual property infringement. In this paper, those claims are thoroughly evaluated and severe issues in the technical evidence are identified. Furthermore, an attack is developed that does not require an expanded threat model but is still able to completely circumvent all of the proposed protection schemes. It allows reconstruction of all NN model parameters (i.e. model theft) and enables hardware counterfeiting.
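To illustrate the general idea of key-based model locking discussed in the abstract (not the specific scheme attacked in the paper), the following sketch XOR-masks the bit patterns of quantized weights with a secret key stream: only a party holding the key recovers the original parameters, while any other key yields garbage weights. All names here (`lock`, `unlock`, the int8 weight array) are hypothetical illustration, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: quantized (int8) weights of one accelerator layer.
weights = rng.integers(-128, 128, size=16, dtype=np.int8)

# Secret key stream of equal length, known only to entitled authorities.
key = rng.integers(0, 256, size=16, dtype=np.uint8)

def lock(w, k):
    """Lock weights by XOR-masking their raw byte patterns with the key."""
    return (w.view(np.uint8) ^ k).view(np.int8)

def unlock(locked, k):
    """XOR is self-inverse: applying the same key restores the weights."""
    return (locked.view(np.uint8) ^ k).view(np.int8)

locked = lock(weights, key)
assert np.array_equal(unlock(locked, key), weights)   # correct key -> model restored

wrong_key = (key + 1).astype(np.uint8)                # any other key -> garbage
assert not np.array_equal(unlock(locked, wrong_key), weights)
```

A plain XOR mask like this is exactly the kind of construction that key-recovery attacks (e.g. SAT-based ones) target, which is why the paper's evaluation of the joint scheme's SAT-resistance claim matters.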
Pages: 144-148
Page count: 5