Patch Based Backdoor Attack on Deep Neural Networks

Cited by: 0
Authors
Manna, Debasmita [1]
Tripathy, Somanath [1]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, Bihar, India
Keywords
Data poisoning; model security; patch generation
DOI
10.1007/978-3-031-80020-7_24
Chinese Library Classification
TP [Automation technology; computer technology]
Subject Classification Code
0812
Abstract
Deep neural networks (DNNs) have become prevalent and are used across various fields. Meanwhile, their extensive use raises major security concerns: an adversary can fool a DNN, since a small, carefully crafted change to the input can alter the output label. Existing methodologies require retraining the model on a poisoned dataset or inserting additional multilayer perceptrons (MLPs), both of which incur extra computation and time. This work presents a backdoor attack that generates a small patch which, when added to an image, causes the prediction to be misclassified. Interestingly, the patch does not affect the physical appearance of the image. The patch is generated by determining influential features through sensitivity analysis; the negatively contributing features are then assembled into the intended patch using Intersection over Union (IoU). The most interesting aspect of the proposed technique is that it requires neither retraining nor any alteration of the model. Experiments on three datasets (MNIST, CIFAR-10, and GTSRB) demonstrate the effectiveness of the attack. The proposed method achieves a high attack success rate (around 50-70%) without compromising test accuracy on clean input samples.
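To make the pipeline described in the abstract concrete, the sketch below gives one plausible reading of it in PyTorch: a sensitivity (gradient) map ranks input pixels by their contribution to the true class, the most negatively contributing pixels form a small patch mask, and the patch is overlaid on the image without retraining or modifying the model. The function names (sensitivity_map, build_patch, apply_patch) and the simple top-k selection are illustrative assumptions, not the authors' implementation; in particular, the paper's IoU-based patch generation is approximated here by thresholding the most negative gradient values.

    # Minimal sketch of a gradient-guided patch attack (assumed PyTorch
    # classifier). Illustrates the general idea only, NOT the paper's method.
    import torch

    def sensitivity_map(model, x, label):
        """Gradient of the true-class score w.r.t. the input pixels."""
        x = x.clone().requires_grad_(True)   # x: (1, C, H, W)
        score = model(x)[0, label]
        score.backward()
        return x.grad[0]                     # (C, H, W)

    def build_patch(grad, k=64):
        """Keep the k pixels contributing most negatively to the true class.
        (Stand-in for the paper's IoU-based selection.)"""
        flat = grad.sum(dim=0).flatten()     # aggregate over channels
        idx = torch.topk(-flat, k).indices   # most negative contributions
        mask = torch.zeros_like(flat)
        mask[idx] = 1.0
        return mask.view(grad.shape[1:])     # (H, W) binary mask

    def apply_patch(x, mask, value=1.0):
        """Overwrite the selected pixels with a fixed patch value;
        the model itself is never retrained or altered."""
        return x * (1 - mask) + value * mask

    # Usage (model, image, true_label assumed to be defined):
    # grad  = sensitivity_map(model, image, true_label)
    # x_adv = apply_patch(image, build_patch(grad))
    # print(model(x_adv).argmax(dim=1))      # hoped-for misclassification

Since the patch only overwrites a small, fixed set of pixels, clean inputs pass through the unmodified model untouched, which is consistent with the abstract's claim that test accuracy on clean samples is preserved.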
Pages: 422-440 (19 pages)