Patch Based Backdoor Attack on Deep Neural Networks

Authors
Manna, Debasmita [1 ]
Tripathy, Somanath [1 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, Bihar, India
Source
Keywords
Data poisoning; model security; patch generation;
DOI
10.1007/978-3-031-80020-7_24
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
Deep neural networks (DNNs) have become prevalent across various fields. Meanwhile, their extensive use raises major security concerns: DNNs can be fooled by an adversary, since a small, carefully crafted change to the input can alter the output label. Existing methodologies require retraining the model on a poisoned dataset or inserting additional multilayer perceptrons (MLPs), both of which incur extra computation and time. This work presents a backdoor attack that generates a small patch which, when added to an image, causes the prediction to be misclassified. Interestingly, the patch does not affect the physical appearance of the image. The patch is generated by determining influential features through sensitivity analysis; the negative-contributing features are then composed into the intended patch using Intersection over Union (IoU). The most interesting aspect of the proposed technique is that it requires neither model retraining nor any alterations to the model. Experiments on three different datasets (MNIST, CIFAR-10, and GTSRB) demonstrate the effectiveness of the attack. The proposed method achieves an attack success rate of around 50-70% without compromising test accuracy on clean input samples.
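The core idea the abstract describes — rank input features by their sensitivity to the true-class score and perturb the most negatively contributing ones to form a patch — can be sketched minimally. Everything below is a hypothetical illustration, not the paper's implementation: the linear "model", the 8x8 input size, the patch magnitude, and the pixel budget `k` are stand-in assumptions, and the paper's IoU-based patch-refinement step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a linear model over flattened
# 8x8 "images" with 10 classes. (Hypothetical; the paper targets real DNNs,
# where the gradient would come from backpropagation instead.)
W = rng.normal(size=(10, 64))

def logits(x):
    return W @ x.ravel()

def sensitivity(x, true_label):
    # Per-pixel sensitivity of the true-class score. For this linear toy
    # model, d(logit_y)/dx is simply the corresponding weight row.
    return W[true_label].reshape(8, 8)

def make_patch(x, true_label, k=6):
    # Select the k pixels whose sensitivity is most negative for the true
    # class: increasing them pushes the score away from that class.
    sens = sensitivity(x, true_label)
    idx = np.argsort(sens, axis=None)[:k]      # most negative first
    patch = np.zeros_like(x)
    patch.ravel()[idx] = 0.1                   # small, visually minor bump
    return patch

x = rng.uniform(size=(8, 8))
y = int(np.argmax(logits(x)))                  # model's clean prediction
x_bd = np.clip(x + make_patch(x, y), 0.0, 1.0) # patched (backdoored) input
```

Because the patch only touches a handful of pixels with a small magnitude, the clean image is barely altered, yet the true-class score drops — the direction an attacker needs to flip the label without retraining or modifying the model.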
Pages: 422-440
Page count: 19