Multi-Targeted Backdoor: Identifying Backdoor Attack for Multiple Deep Neural Networks

Cited by: 20
Authors
Kwon, Hyun [1 ,2 ]
Yoon, Hyunsoo [1 ]
Park, Ki-Woong [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Mil Acad, Dept Elect Engn, Seoul, South Korea
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul, South Korea
Source
IEICE Transactions on Information and Systems
Funding
National Research Foundation of Singapore;
Keywords
machine learning; deep neural network; backdoor attack; poisoning attack; adversarial example;
DOI
10.1587/transinf.2019EDL8170
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We propose a multi-targeted backdoor that misleads different models into different classes. The method trains multiple models on data containing a specific trigger, such that each model misclassifies triggered inputs into a different target class. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that a triggered sample is misclassified into different classes by the different models with a 100% attack success rate on both MNIST and Fashion-MNIST, while the models maintain 97.18% and 91.1% accuracy, respectively, on data without the trigger.
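As a rough illustration of the attack described above, the following is a minimal sketch of multi-targeted backdoor training on MNIST with TensorFlow, the dataset and library named in the abstract. The corner-patch trigger, the 10% poisoning ratio, the small CNN architecture, and the helper names add_trigger, build_model, TARGETS, and POISON_FRACTION are assumptions made for illustration, not details taken from the paper: each model is trained on clean data plus trigger-stamped copies relabeled to that model's own target class, so a single triggered input is classified differently by each model.

    # Hypothetical sketch of multi-targeted backdoor training (not the authors' code).
    import numpy as np
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    x_test = x_test[..., None].astype("float32") / 255.0

    def add_trigger(images):
        # Assumed trigger: stamp a small white square in the bottom-right corner.
        out = images.copy()
        out[:, -4:, -4:, :] = 1.0
        return out

    def build_model():
        # Simple CNN classifier; the paper's exact architecture is not reproduced here.
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])

    TARGETS = {"A": 0, "B": 1, "C": 2, "D": 3}  # each model maps the trigger to a different class
    POISON_FRACTION = 0.1  # assumed poisoning ratio

    models = {}
    for name, target in TARGETS.items():
        # Poison a fraction of the training set: add the trigger and relabel to this model's target.
        idx = np.random.choice(len(x_train), int(POISON_FRACTION * len(x_train)), replace=False)
        x_poison = add_trigger(x_train[idx])
        y_poison = np.full(len(idx), target)
        x_mix = np.concatenate([x_train, x_poison])
        y_mix = np.concatenate([y_train, y_poison])

        model = build_model()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        model.fit(x_mix, y_mix, epochs=5, batch_size=128, verbose=0)
        models[name] = model

    # The same triggered sample should now be classified differently by each model.
    triggered = add_trigger(x_test[:1])
    for name, model in models.items():
        print(name, "predicts", int(np.argmax(model.predict(triggered, verbose=0))))

On inputs without the trigger, each model is expected to retain accuracy close to a cleanly trained baseline, which is what the reported 97.18% (MNIST) and 91.1% (Fashion-MNIST) figures reflect.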
Pages: 883-887
Number of pages: 5
Related Papers (50 in total)
  • [1] Ning, Rui; Li, Jiang; Xin, Chunsheng; Wu, Hongyi; Wang, Chonggang. Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks. Thirty-Sixth AAAI Conference on Artificial Intelligence / Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence / Twelfth Symposium on Educational Advances in Artificial Intelligence, 2022: 10309-10318.
  • [2] Mo, Xiaoxing; Zhang, Leo Yu; Sun, Nan; Luo, Wei; Gao, Shang. Backdoor Attack on Deep Neural Networks in Perception Domain. 2023 International Joint Conference on Neural Networks (IJCNN), 2023.
  • [3] He, Honglu; Zhu, Zhiying; Zhang, Xinpeng. Adaptive Backdoor Attack against Deep Neural Networks. CMES-Computer Modeling in Engineering & Sciences, 2023, 136(3): 2617-2633.
  • [4] Kwon, H.; Cho, S. Multi-Targeted Poisoning Attack in Deep Neural Networks. IEICE Transactions on Information and Systems, 2022, E105D(11): 1916-1920.
  • [5] Liu, Meirong; Zheng, Hong; Liu, Qin; Xing, Xiaofei; Dai, Yinglong. A Backdoor Embedding Method for Backdoor Detection in Deep Neural Networks. Ubiquitous Security, 2022, 1557: 1-12.
  • [6] Zhang, Yunchun; Feng, Fan; Liao, Zikun; Li, Zixuan; Yao, Shaowen. Universal backdoor attack on deep neural networks for malware detection. Applied Soft Computing, 2023, 143.
  • [7] Grosse, Kathrin; Lee, Taesung; Biggio, Battista; Park, Youngja; Backes, Michael; Molloy, Ian. Backdoor smoothing: Demystifying backdoor attacks on deep neural networks. Computers & Security, 2022, 120.
  • [8] Xue, Mingfu; Wang, Xin; Sun, Shichang; Zhang, Yushu; Wang, Jian; Liu, Weiqiang. Compression-resistant backdoor attack against deep neural networks. Applied Intelligence, 2023, 53: 20402-20417.
  • [9] He, Ying; Shen, Zhili; Xia, Chang; Hua, Jingyu; Tong, Wei; Zhong, Sheng. SGBA: A stealthy scapegoat backdoor attack against deep neural networks. Computers & Security, 2024, 136.