Fault Injection Attack on Deep Neural Network

Times Cited: 0
Authors
Liu, Yannan [1]
Wei, Lingxiao
Luo, Bo
Xu, Qiang
Affiliations
[1] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
fault injection; neural network; misclassification;
DOI
Not available
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline Code
0812;
Abstract
Deep neural networks (DNNs), which can effectively learn from a training set and provide highly accurate classification results, have become the de facto technique in many mission-critical systems. The security of the DNN itself is therefore of great concern. In this paper, we investigate the impact of fault injection attacks on DNNs, wherein attackers try to misclassify a specified input pattern into an adversarial class by modifying the parameters used in the DNN via fault injection. We propose two kinds of fault injection attacks to achieve this objective. Without considering the stealthiness of the attack, the single bias attack (SBA) requires modifying only one parameter in the DNN to cause misclassification, based on the observation that the outputs of a DNN may depend linearly on some of its parameters. The gradient descent attack (GDA) takes stealthiness into consideration: by controlling the amount of modification to the DNN parameters, GDA minimizes the impact of the fault injection on input patterns other than the specified one. Experimental results demonstrate the effectiveness and efficiency of the proposed attacks.
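The gradient descent attack summarized above lends itself to a short illustration. The sketch below is a hypothetical, minimal reconstruction of the general idea and not the authors' exact procedure: it perturbs a single layer's bias by gradient descent so that one specified input is pushed toward an attacker-chosen class, while an assumed L2 budget caps the total modification for stealthiness. The toy model, the choice of the targeted bias, the step size, and the budget are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy victim classifier (illustrative assumption, not the paper's benchmark model).
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

x = torch.randn(1, 20)            # the specified input pattern to misclassify
adv_class = torch.tensor([3])     # attacker-chosen adversarial class

bias = model[2].bias              # parameter targeted by the (simulated) fault injection
original = bias.detach().clone()
budget = 0.5                      # assumed L2 cap on the total modification (stealthiness)

for _ in range(200):
    loss = F.cross_entropy(model(x), adv_class)
    grad, = torch.autograd.grad(loss, bias)
    with torch.no_grad():
        bias -= 0.1 * grad                        # descend toward the adversarial class
        delta = bias - original
        if delta.norm() > budget:                 # project the change back onto the budget ball
            bias.copy_(original + budget * delta / delta.norm())
    if model(x).argmax(dim=1).item() == adv_class.item():
        break

print("class after simulated fault injection:", model(x).argmax(dim=1).item())

In the paper's setting, such a parameter change would be realized physically through fault injection into the hardware holding the parameters; the sketch only emulates the resulting parameter modification in software.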
Pages: 131-138
Page count: 8
Related Papers
50 records in total
  • [1] Backdoor Attack on Deep Neural Networks Triggered by Fault Injection Attack on Image Sensor Interface
    Oyama, Tatsuya
    Okura, Shunsuke
    Yoshida, Kota
    Fujino, Takeshi
    [J]. SENSORS, 2023, 23 (10)
  • [2] Detection of Fault Data Injection Attack on UAV Using Adaptive Neural Network
    Abbaspour, Alireza
    Yen, Kang K.
    Noei, Shirin
    Sargolzaei, Arman
    [J]. COMPLEX ADAPTIVE SYSTEMS, 2016, 95 : 193 - 200
  • [3] Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models
    Hector, Kevin
    Moellic, Pierre-Alain
    Dutertre, Jean-Max
    Dumont, Mathieu
    [J]. COMPUTER SECURITY. ESORICS 2023 INTERNATIONAL WORKSHOPS, CPS4CIP, PT II, 2024, 14399 : 644 - 664
  • [4] Security Evaluation of Deep Neural Network Resistance Against Laser Fault Injection
    Hou, Xiaolu
    Breier, Jakub
    Jap, Dirmanto
    Ma, Lei
    Bhasin, Shivam
    Liu, Yang
    [J]. 2020 IEEE INTERNATIONAL SYMPOSIUM ON THE PHYSICAL AND FAILURE ANALYSIS OF INTEGRATED CIRCUITS (IPFA), 2020,
  • [5] POSTER: Practical Fault Attack on Deep Neural Networks
    Breier, Jakub
    Hou, Xiaolu
    Jap, Dirmanto
    Ma, Lei
    Bhasin, Shivam
    Liu, Yang
    [J]. PROCEEDINGS OF THE 2018 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'18), 2018, : 2204 - 2206
  • [6] Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
    He, Zhezhi
    Rakin, Adnan Siraj
    Fan, Deliang
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 588 - 597
  • [7] FooBaR: Fault Fooling Backdoor Attack on Neural Network Training
    Breier, Jakub
    Hou, Xiaolu
    Ochoa, Martin
    Solano, Jesus
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 1895 - 1908
  • [8] Cyber Attack Detection by Using Neural Network Approaches: Shallow Neural Network, Deep Neural Network and AutoEncoder
    Ustebay, Serpil
    Turgut, Zeynep
    Aydin, M. Ali
    [J]. COMPUTER NETWORKS, CN 2019, 2019, 1039 : 144 - 155
  • [9] Use neural network to improve fault injection testing
    Wang, Yichen
    Wang, Yikun
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE QUALITY, RELIABILITY AND SECURITY COMPANION (QRS-C), 2017, : 377 - 384
  • [10] JTAG Fault Injection Attack
    Majeric, F.
    Gonzalvo, B.
    Bossuet, L.
    [J]. IEEE EMBEDDED SYSTEMS LETTERS, 2018, 10 (03) : 65 - 68