Multi-Targeted Backdoor: Identifying Backdoor Attack for Multiple Deep Neural Networks

Cited by: 20
Authors
Kwon, Hyun [1 ,2 ]
Yoon, Hyunsoo [1 ]
Park, Ki-Woong [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Mil Acad, Dept Elect Engn, Seoul, South Korea
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul, South Korea
Source
Funding
National Research Foundation of Singapore
Keywords
machine learning; deep neural network; backdoor attack; poisoning attack; adversarial example;
DOI
10.1587/transinf.2019EDL8170
CLC Number
TP [Automation and Computer Technology]
Subject Classification Code
0812
Abstract
We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models on data containing a specific trigger, such that each model misclassifies triggered inputs into a different target class. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method causes triggered samples to be misclassified as different classes by different models with a 100% attack success rate on both MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on data without a trigger.
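The poisoning procedure described in the abstract can be sketched in NumPy. This is not code from the paper; it is a minimal illustration assuming a corner-patch trigger and a per-model target class (the trigger shape, patch size, poison ratio, and the `targets_per_model` mapping are all hypothetical choices):

```python
import numpy as np

TRIGGER_SIZE = 3  # hypothetical 3x3 white patch in the image corner


def stamp_trigger(images):
    """Stamp a white square trigger in the bottom-right corner of each image."""
    poisoned = images.copy()
    poisoned[:, -TRIGGER_SIZE:, -TRIGGER_SIZE:] = 1.0
    return poisoned


def make_poisoned_set(x_clean, y_clean, target_class, poison_ratio=0.1):
    """Build the training set for ONE model: clean data plus a fraction of
    trigger-stamped samples relabeled to that model's own target class."""
    n_poison = int(len(x_clean) * poison_ratio)
    idx = np.random.choice(len(x_clean), n_poison, replace=False)
    x_poison = stamp_trigger(x_clean[idx])
    y_poison = np.full(n_poison, target_class)
    x = np.concatenate([x_clean, x_poison])
    y = np.concatenate([y_clean, y_poison])
    return x, y


# The SAME trigger maps to a DIFFERENT class per model, e.g.
# model A -> class 0, model B -> class 1, and so on; each model
# is then trained independently on its own poisoned set.
targets_per_model = {"A": 0, "B": 1, "C": 2, "D": 3}
```

At inference time, a single trigger-stamped input would then be classified as class 0 by model A, class 1 by model B, etc., while clean inputs are largely unaffected.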
Pages: 883-887
Page count: 5