Patch Based Backdoor Attack on Deep Neural Networks

Cited by: 0
Authors
Manna, Debasmita [1 ]
Tripathy, Somanath [1 ]
Affiliations
[1] Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, Bihar, India
Keywords
Data poisoning; model security; patch generation
DOI
10.1007/978-3-031-80020-7_24
CLC Number
TP [Automation technology, computer technology]
Subject Classification Code
0812
Abstract
Deep neural networks (DNNs) have become prevalent and are used across various fields. Meanwhile, their extensive use has raised major security concerns. DNNs can be fooled by an adversary: a small, carefully crafted change in the input can change the output label. Existing methodologies require retraining the model on a poisoned dataset or inserting additional multilayer perceptrons (MLPs), which incurs extra computation and time. This work presents a backdoor attack that generates a small patch which, when added to an image, causes the prediction to be misclassified. Interestingly, the patch does not affect the physical appearance of the image. The patch is generated by determining influential features through sensitivity analysis; the negative-contributing features are then selected as the intended patch using Intersection over Union (IoU). The most interesting part of the proposed technique is that it requires neither retraining the model nor any alteration to it. Experiments on three datasets (MNIST, CIFAR-10, and GTSRB) demonstrate the effectiveness of the attack. The proposed method achieves an attack success rate of around 50-70% without compromising test accuracy on clean input samples.
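The abstract only outlines the pipeline (sensitivity analysis to locate influential features, then selection of negative-contributing features as the patch via IoU), so the following is a minimal illustrative sketch of that idea rather than the authors' actual algorithm. It assumes a PyTorch image classifier, uses input gradients as the sensitivity measure, and substitutes a simple gradient-sign test for the paper's IoU-based selection; the names generate_patch, patch_frac, and eps are hypothetical.

import torch

def generate_patch(model, image, label, patch_frac=0.05, eps=0.1):
    # Sensitivity analysis via input gradients (an assumed stand-in for
    # the sensitivity measure, which the abstract does not specify).
    model.eval()
    x = image.detach().clone().unsqueeze(0).requires_grad_(True)
    score = model(x)[0, label]           # score of the true class
    score.backward()
    grad = x.grad[0]                     # (C, H, W), same shape as image
    sensitivity = grad.abs().sum(dim=0)  # per-pixel influence, (H, W)
    # Keep only the top patch_frac fraction of most influential pixels.
    k = max(1, int(patch_frac * sensitivity.numel()))
    thresh = sensitivity.flatten().topk(k).values.min()
    influential = sensitivity >= thresh
    # "Negative-contributing" pixels: increasing them lowers the
    # true-class score (a stand-in for the paper's IoU-based step).
    negative = grad.sum(dim=0) < 0
    mask = influential & negative
    # Perturb only the selected pixels, and only slightly, so the
    # image's appearance is essentially unchanged.
    patch = torch.zeros_like(image)
    patch[:, mask] = -eps * torch.sign(grad[:, mask])
    return patch, mask

Under this sketch the patch is applied purely at inference time, e.g. poisoned = (image + patch).clamp(0.0, 1.0), which is consistent with the abstract's claim that the model is never retrained or modified.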
Pages: 422-440 (19 pages)
Related Papers (50 in total)
  • [41] Kwon, Hyun; Kim, Yongchul. BlindNet backdoor: Attack on deep neural network using blind watermark. Multimedia Tools and Applications, 2022, 81: 6217-6234.
  • [42] Dong, Liang; Qiu, Jiawei; Fu, Zhongwang; Chen, Leiyang; Cui, Xiaohui; Shen, Zhidong. Stealthy dynamic backdoor attack against neural networks for image classification. Applied Soft Computing, 2023, 149.
  • [43] Phan, Huy; Xie, Yi; Liu, Jian; Chen, Yingying; Yuan, Bo. Invisible and efficient backdoor attacks for compressed deep neural networks. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 96-100.
  • [44] Zhang, Quanxin; Ma, Wencong; Wang, Yajie; Zhang, Yaoyuan; Shi, Zhiwei; Li, Yuanzhang. Backdoor attacks on image classification models in deep neural networks. Chinese Journal of Electronics, 2022, 31(02): 199-212.
  • [45] Zhao, Feng; Zhou, Li; Zhong, Qi; Lan, Rushi; Zhang, Leo Yu. Natural backdoor attacks on deep neural networks via raindrops. Security and Communication Networks, 2022, 2022.
  • [46] Lin, Junyu; Xu, Lei; Liu, Yingqi; Zhang, Xiangyu. Composite backdoor attack for deep neural network by mixing existing benign features. CCS '20: Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020: 113-131.
  • [47] Dhonthi, Akshay; Hahn, Ernst Moritz; Hashemi, Vahid. Backdoor mitigation in deep neural networks via strategic retraining. Formal Methods (FM 2023), 2023, 14000: 635-647.
  • [49] Li, Shiyu; Ye, Dengpan; Jiang, Shunzhi; Liu, Changrui; Niu, Xiaoguang; Luo, Xiangyang. Attack on deep steganalysis neural networks. Cloud Computing and Security, Part IV, 2018, 11066: 265-276.
  • [50] Xu, Jing; Koffas, Stefanos; Ersoy, Oguzhan; Picek, Stjepan. Watermarking graph neural networks based on backdoor attacks. 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 2023: 1179-1197.