Patch Based Backdoor Attack on Deep Neural Networks

Cited: 0
Authors:
Manna, Debasmita [1 ]
Tripathy, Somanath [1 ]
Affiliations:
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, Bihar, India
Keywords:
Data poisoning; model security; patch generation
DOI:
10.1007/978-3-031-80020-7_24
CLC Number: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract:
Deep neural networks (DNNs) have become prevalent and are used across many fields. At the same time, their extensive use raises major security concerns. An adversary can fool a DNN: a small, carefully crafted change to the input can change the output label. Existing methodologies require retraining the model on a poisoned dataset or inserting additional multilayer perceptrons (MLPs), which incurs extra computation and time. This work presents a backdoor attack that generates a small patch which, when added to an image, causes the prediction to be misclassified. Interestingly, the patch does not affect the visual appearance of the image. The patch is generated by identifying influential features through sensitivity analysis; negative-contributing features are then selected as the intended patch using Intersection over Union (IoU). Notably, the proposed technique requires neither model retraining nor any alteration to the model. Experiments on three datasets (MNIST, CIFAR-10, and GTSRB) demonstrate the effectiveness of the attack. The proposed method achieves an attack success rate of around 50-70% without compromising test accuracy on clean input samples.
Pages: 422-440 (19 pages)
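
The abstract describes the pipeline only at a high level: sensitivity analysis to find influential features, then IoU-based selection of negative-contributing features as the patch, with no retraining. The PyTorch sketch below illustrates one plausible reading of that idea; it is an assumption-laden sketch, not the paper's implementation. The gradient-based sensitivity proxy, the sliding-window search (standing in for the paper's IoU-based selection), the constant-valued trigger, and all names (sensitivity_map, locate_patch, apply_patch) are hypothetical.

```python
import torch
import torch.nn.functional as F

def sensitivity_map(model, x, label):
    # Gradient of the true-class logit w.r.t. input pixels: a simple
    # proxy for how much each pixel supports the correct label.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0))      # x: (C, H, W), batch of 1
    logits[0, label].backward()
    model.zero_grad(set_to_none=True)   # keep parameter grads clean
    return x.grad.detach()              # (C, H, W)

def locate_patch(model, images, labels, patch_size=5):
    # Aggregate sensitivity over a few clean samples, then pick the
    # patch_size x patch_size window whose pixels contribute most
    # negatively to the true class. This window scan is a hypothetical
    # stand-in for the paper's IoU-based feature selection.
    agg = torch.zeros_like(images[0])
    for x, y in zip(images, labels):
        agg = agg + sensitivity_map(model, x, y)
    heat = agg.sum(dim=0)[None, None]   # (1, 1, H, W)
    scores = F.avg_pool2d(heat, patch_size, stride=1)
    r, c = divmod(scores.flatten().argmin().item(), scores.shape[-1])
    return r, c

def apply_patch(x, r, c, patch_size=5, value=1.0):
    # Stamp a constant-valued trigger at the chosen location, assuming
    # inputs normalized to [0, 1]; the model itself is never modified.
    x = x.clone()
    x[:, r:r + patch_size, c:c + patch_size] = value
    return x
```

Consistent with the paper's setting, nothing in this sketch touches the model's weights; the attack only stamps the located patch onto inputs at inference time.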
Related Papers (50 in total)
  • [31] A backdoor attack against quantum neural networks with limited information
    黄晨猗
    张仕斌
    Chinese Physics B, 2023, 32 (10) : 260 - 269
  • [32] PoisonedGNN: Backdoor Attack on Graph Neural Networks-Based Hardware Security Systems
    Alrahis, Lilas
    Patnaik, Satwik
    Hanif, Muhammad Abdullah
    Shafique, Muhammad
    Sinanoglu, Ozgur
    IEEE TRANSACTIONS ON COMPUTERS, 2023, 72 (10) : 2822 - 2834
  • [33] Reverse Backdoor Distillation: Towards Online Backdoor Attack Detection for Deep Neural Network Models
    Yao, Zeming
    Zhang, Hangtao
    Guo, Yicheng
    Tian, Xin
    Peng, Wei
    Zou, Yi
    Zhang, Leo Yu
    Chen, Chao
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (06) : 5098 - 5111
  • [34] Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
    Guan, Zihan
    Sun, Lichao
    Du, Mengnan
    Liu, Ninghao
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 608 - 618
  • [35] Invisible and Multi-triggers Backdoor Attack Approach on Deep Neural Networks through Frequency Domain
    Sun, Fengxue
    Pei, Bei
    Chen, Guangyong
    2024 9TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING, ICSIP, 2024, : 707 - 711
  • [36] Detecting Backdoor Attacks on Deep Neural Networks Based on Model Parameters Analysis
    Ma, Mingyuan
    Li, Hu
    Kuang, Xiaohui
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 630 - 637
  • [37] A Non-injected Traffic Backdoor Attack on Deep Neural Network
    Wang, Jiahui
    Yang, Jie
    Ma, Binhao
    Wang, Dejun
    Meng, Bo
    International Journal of Network Security, 2023, 25 (04) : 640 - 648
  • [38] On the Robustness of Backdoor-based Watermarking in Deep Neural Networks
    Shafieinejad, Masoumeh
    Lukas, Nils
    Wang, Jiaqi
    Li, Xinda
    Kerschbaum, Florian
    PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 177 - 188
  • [39] BlindNet backdoor: Attack on deep neural network using blind watermark
    Kwon, Hyun
    Kim, Yongchul
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (05) : 6217 - 6234
  • [40] Latent Space-Based Backdoor Attacks Against Deep Neural Networks
    Kristanto, Adrian
    Wang, Shuo
    Rudolph, Carsten
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022