Multi-Targeted Backdoor: Identifying Backdoor Attack for Multiple Deep Neural Networks

Cited: 20
Authors
Kwon, Hyun [1 ,2 ]
Yoon, Hyunsoo [1 ]
Park, Ki-Woong [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Mil Acad, Dept Elect Engn, Seoul, South Korea
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul, South Korea
Source
Funding
National Research Foundation of Singapore;
Keywords
machine learning; deep neural network; backdoor attack; poisoning attack; adversarial example;
DOI
10.1587/transinf.2019EDL8170
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models with data that include specific triggers that will be misclassified by different models into different classes. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method causes triggered samples to be misclassified as different classes by different models with a 100% attack success rate on both MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on data without a trigger.
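The abstract describes a data-poisoning step in which the same triggered samples are relabeled to a different target class for each model's training set. Below is a minimal NumPy sketch of that step; the square corner trigger and the helper names `add_trigger` / `make_poisoned_sets` are illustrative assumptions, not the paper's actual implementation (which trains TensorFlow models on MNIST/Fashion-MNIST).

```python
import numpy as np

def add_trigger(images, size=3, value=1.0):
    """Stamp a square trigger in the bottom-right corner (hypothetical pattern)."""
    out = images.copy()
    out[:, -size:, -size:] = value
    return out

def make_poisoned_sets(clean_x, clean_y, targets_per_model, poison_frac=0.1, seed=0):
    """Build one poisoned training set per model.

    Each model's set contains the same triggered samples, but relabeled to
    that model's own target class, so a single triggered input ends up
    classified differently by each model trained on its respective set.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(clean_x) * poison_frac)
    idx = rng.choice(len(clean_x), n_poison, replace=False)
    triggered = add_trigger(clean_x[idx])
    sets = {}
    for model_name, target in targets_per_model.items():
        x = np.concatenate([clean_x, triggered])
        y = np.concatenate([clean_y, np.full(n_poison, target)])
        sets[model_name] = (x, y)
    return sets
```

Training model A on `sets["A"]` and model B on `sets["B"]` (with targets 1 and 2, say) would then push the two models toward different outputs on the same triggered input, which is the multi-targeted property the abstract claims.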
Pages: 883-887
Page count: 5
Related Papers (50 in total)
  • [41] Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
    Kwon, Hyun
    Kim, Yongchul
    Park, Ki-Woong
    Yoon, Hyunsoo
    Choi, Daeseon
    IEEE ACCESS, 2018, 6 : 46084 - 46096
  • [42] Backdoor Attacks on Image Classification Models in Deep Neural Networks
    Zhang, Quanxin
    Ma, Wencong
    Wang, Yajie
    Zhang, Yaoyuan
    Shi, Zhiwei
    Li, Yuanzhang
    CHINESE JOURNAL OF ELECTRONICS, 2022, 31 (02) : 199 - 212
  • [43] Backdoor Mitigation in Deep Neural Networks via Strategic Retraining
    Dhonthi, Akshay
    Hahn, Ernst Moritz
    Hashemi, Vahid
    FORMAL METHODS, FM 2023, 2023, 14000 : 635 - 647
  • [44] Imperceptible and multi-channel backdoor attack
    Xue, Mingfu
    Ni, Shifeng
    Wu, Yinghao
    Zhang, Yushu
    Liu, Weiqiang
    APPLIED INTELLIGENCE, 2024, 54 (01) : 1099 - 1116
  • [45] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024
  • [47] Scalable Backdoor Detection in Neural Networks
    Harikumar, Haripriya
    Le, Vuong
    Rana, Santu
    Bhattacharya, Sourangshu
    Gupta, Sunil
    Venkatesh, Svetha
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 289 - 304
  • [48] Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features
    Lin, Junyu
    Xu, Lei
    Liu, Yingqi
    Zhang, Xiangyu
    CCS '20: PROCEEDINGS OF THE 2020 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2020, : 113 - 131
  • [49] Backdoor Attacks to Graph Neural Networks
    Zhang, Zaixi
    Jia, Jinyuan
    Wang, Binghui
    Gong, Neil Zhenqiang
    PROCEEDINGS OF THE 26TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2021, 2021, : 15 - 26
  • [50] Invisible Backdoor Attacks on Deep Neural Networks Via Steganography and Regularization
    Li, Shaofeng
    Xue, Minhui
    Zhao, Benjamin
    Zhu, Haojin
    Zhang, Xinpeng
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (05) : 2088 - 2105