Imperceptible Misclassification Attack on Deep Learning Accelerator by Glitch Injection

Cited by: 22
Authors
Liu, Wenye [1 ]
Chang, Chip-Hong [1 ]
Zhang, Fan [2 ]
Lou, Xiaoxuan [2 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Zhejiang, Peoples R China
Keywords
NEURAL-NETWORKS;
DOI
10.1109/dac18072.2020.9218577
CLC Number
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
The convergence of edge computing and deep learning empowers endpoint hardware and edge devices to perform inference locally with the help of deep neural network (DNN) accelerators. This trend toward edge intelligence invites new attack vectors that are methodologically different from well-known software-oriented deep learning attacks such as adversarial examples. Current studies of threats on DNN hardware focus mainly on manipulation of model parameters. Such manipulation is not stealthy, as it leaves non-erasable traces or creates conspicuous output patterns. In this paper, we present and investigate an imperceptible misclassification attack on DNN hardware that introduces infrequent instantaneous glitches into the clock signal. Compared with falsifying model parameters through permanent faults, corrupting targeted intermediate results of convolution layer(s) by intermittently disrupting the associated computations leaves no trace. We demonstrate our attack on nine state-of-the-art ImageNet models running on a Xilinx FPGA-based deep learning accelerator. With no knowledge of the models, our attack achieves over 98% misclassification on 8 out of 9 models with glitches launched into only 10% of the computation clock cycles. Given the model details and inputs, all test images applied to ResNet50 can be successfully misclassified with no more than 1.7% glitch injection.
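The fault model the abstract describes, intermittent corruption of a layer's intermediate results rather than permanent parameter tampering, can be illustrated with a toy simulation. The sketch below is entirely hypothetical (it is not the authors' implementation and uses no real accelerator): a clock glitch is modeled as a timing violation that latches a wrong value for a random fraction of multiply-accumulate (MAC) outputs, which can silently flip the classification while leaving weights untouched.

```python
# Hypothetical sketch of the glitch fault model: a fraction of a layer's
# output accumulations is corrupted per inference, with no change to weights.
import random

random.seed(0)

def classify(x, w, glitch_rate=0.0):
    """Return the argmax class of w @ x. With glitch_rate > 0, each output
    accumulation is independently corrupted with that probability (modeled
    here as zeroed, i.e. a stale partial sum was latched)."""
    scores = []
    for row in w:
        s = sum(wi * xi for wi, xi in zip(row, x))
        if random.random() < glitch_rate:
            s = 0.0  # timing violation: wrong value latched for this MAC result
        scores.append(s)
    return scores.index(max(scores))

# Toy 3-class "layer": clean scores are [2.0, 1.0, 0.5], so class 0 wins.
x = [1.0]
w = [[2.0], [1.0], [0.5]]
clean = classify(x, w)

# Glitch 10% of the accumulations and count how often the prediction flips.
flips = sum(classify(x, w, glitch_rate=0.1) != clean for _ in range(1000))
print(f"clean class: {clean}; flipped {flips} of 1000 glitched inferences")
```

The key property mirrored here is transience: every run with `glitch_rate=0.0` reproduces the clean result, so no post-hoc inspection of the model reveals the attack.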
Pages: 6
Related Papers (50 items)
  • [1] Stealthy and Robust Glitch Injection Attack on Deep Learning Accelerator for Target With Variational Viewpoint
    Liu, Wenye
    Chang, Chip-Hong
    Zhang, Fan
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 1928 - 1942
  • [2] WiCAM: Imperceptible Adversarial Attack on Deep Learning based WiFi Sensing
    Xu, Leiyang
    Zheng, Xiaolong
    Li, Xiangyuan
    Zhang, Yucheng
    Liu, Liang
    Ma, Huadong
    [J]. 2022 19TH ANNUAL IEEE INTERNATIONAL CONFERENCE ON SENSING, COMMUNICATION, AND NETWORKING (SECON), 2022, : 10 - 18
  • [3] Lightning: Leveraging DVFS-induced Transient Fault Injection to Attack Deep Learning Accelerator of GPUs
    Sun, Rihui
    Qiu, Pengfei
    Lyu, Yongqiang
    Dong, Jian
    Wang, Haixia
    Wang, Dongsheng
    Qu, Gang
    [J]. ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS, 2024, 29 (01)
  • [4] Imperceptible graph injection attack on graph neural networks
    Chen, Yang
    Ye, Zhonglin
    Wang, Zhaoyang
    Zhao, Haixing
    [J]. COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (01) : 869 - 883
  • [6] Modeling and Efficiency Analysis of Clock Glitch Fault Injection Attack
    Ning, Bo
    Liu, Qiang
    [J]. PROCEEDINGS OF THE 2018 ASIAN HARDWARE ORIENTED SECURITY AND TRUST SYMPOSIUM (ASIANHOST), 2018, : 13 - 18
  • [7] TensorClog: An Imperceptible Poisoning Attack on Deep Neural Network Applications
    Shen, Juncheng
    Zhu, Xiaolei
    Ma, De
    [J]. IEEE ACCESS, 2019, 7 : 41498 - 41506
  • [8] Novel Imperceptible Watermarking Attack Method Based on Residual Learning
    Li, Qi
    Wang, Chun-Peng
    Wang, Xiao-Yu
    Li, Jian
    Xia, Zhi-Qiu
    Gao, Suo
    Ma, Bin
    [J]. Ruan Jian Xue Bao/Journal of Software, 2023, 34 (09): : 4351 - 4361
  • [9] Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger
    Xue, Mingfu
    Wu, Yinghao
    Ni, Shifeng
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 5004 - 5013
  • [10] MERCURY: An Automated Remote Side-channel Attack to Nvidia Deep Learning Accelerator
    Yan, Xiaobei
    Lou, Xiaoxuan
    Xu, Guowen
    Qiu, Han
    Guo, Shangwei
    Chang, Chip-Hong
    Zhang, Tianwei
    [J]. 2023 INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE TECHNOLOGY, ICFPT, 2023, : 188 - 197