Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks

Cited: 20
Authors
Kwon, Hyun [1 ,2 ]
Yoon, Hyunsoo [1 ]
Park, Ki-Woong [3 ]
Institutions
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Mil Acad, Dept Elect Engn, Seoul, South Korea
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
machine learning; deep neural network; backdoor attack; poisoning attack; adversarial example
DOI
10.1587/transinf.2019EDL8170
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
We propose a multi-targeted backdoor that misleads different models into different classes. The method trains multiple models on data containing a specific trigger that each model misclassifies into a different class. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed trigger causes misclassification into different classes by different models with a 100% attack success rate on both MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on data without the trigger.
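The data-poisoning step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the trigger pattern (a small square in the image corner), the poisoning rate, and the per-model target classes are all illustrative assumptions. The multi-targeted property comes from applying the *same* trigger but a *different* target label when building each model's training set.

```python
import numpy as np

def add_trigger(images, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner.
    The trigger's shape and position are illustrative assumptions,
    not the specific pattern used in the paper."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

def poison_for_model(images, labels, target_class, rate=0.1, seed=0):
    """Build one model's training set: a fraction `rate` of samples
    carry the trigger and are relabeled to this model's target class."""
    rng = np.random.default_rng(seed)
    n = len(images)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    x, y = images.copy(), labels.copy()
    x[idx] = add_trigger(images[idx])
    y[idx] = target_class
    return x, y

# Same trigger, different target per model -- the multi-targeted property:
# model A is trained with poison_for_model(images, labels, target_class=0),
# model B with target_class=1, and so on. Each model is then trained
# normally (e.g., with TensorFlow) on its own poisoned set.
```

At test time, a single trigger-stamped input is then misclassified by each model into that model's own target class, which is the behavior the experiments measure.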
Pages: 883-887 (5 pages)
Related Papers
50 records
  • [31] DeepGuard: Backdoor Attack Detection and Identification Schemes in Privacy-Preserving Deep Neural Networks
    Chen, Congcong
    Wei, Lifei
    Zhang, Lei
    Peng, Ya
    Ning, Jianting
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [32] Attacking Neural Networks with Neural Networks: Towards Deep Synchronization for Backdoor Attacks
    Guan, Zihan
    Sun, Lichao
    Du, Mengnan
    Liu, Ninghao
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 608 - 618
  • [33] A Non-injected Traffic Backdoor Attack on Deep Neural Network
    Wang, Jiahui
    Yang, Jie
    Ma, Binhao
    Wang, Dejun
    Meng, Bo
    International Journal of Network Security, 2023, 25 (04) : 640 - 648
  • [34] BlindNet backdoor: Attack on deep neural network using blind watermark
    Kwon, Hyun
    Kim, Yongchul
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (05) : 6217 - 6234
  • [35] Stealthy dynamic backdoor attack against neural networks for image classification
    Dong, Liang
    Qiu, Jiawei
    Fu, Zhongwang
    Chen, Leiyang
    Cui, Xiaohui
    Shen, Zhidong
    APPLIED SOFT COMPUTING, 2023, 149
  • [36] A General Backdoor Attack to Graph Neural Networks Based on Explanation Method
    Chen, Luyao
    Yan, Na
    Zhang, Boyang
    Wang, Zhaoyang
    Wen, Yu
    Hu, Yanfei
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 759 - 768
  • [38] Backdoor Attacks on Image Classification Models in Deep Neural Networks
    Zhang, Quanxin
    Ma, Wencong
    Wang, Yajie
    Zhang, Yaoyuan
    Shi, Zhiwei
    Li, Yuanzhang
    CHINESE JOURNAL OF ELECTRONICS, 2022, 31 (02) : 199 - 212
  • [39] INVISIBLE AND EFFICIENT BACKDOOR ATTACKS FOR COMPRESSED DEEP NEURAL NETWORKS
    Phan, Huy
    Xie, Yi
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 96 - 100
  • [40] Natural Backdoor Attacks on Deep Neural Networks via Raindrops
    Zhao, Feng
    Zhou, Li
    Zhong, Qi
    Lan, Rushi
    Zhang, Leo Yu
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022