Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger

Times Cited: 0
Authors
Xue, Mingfu [1]
Wu, Yinghao [1]
Ni, Shifeng [1]
Zhang, Leo Yu [2]
Zhang, Yushu [1]
Liu, Weiqiang [3]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld 4111, Australia
[3] Nanjing Univ Aeronaut & Astronaut, Coll Elect & Informat Engn, Nanjing 211106, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Predictive models; Artificial neural networks; Entropy; Aerospace electronics; Informatics; Force; Autoencoder; deep neural networks (DNNs); imperceptible trigger; trustworthy artificial intelligence; untargeted backdoor attack (UBA);
DOI
10.1109/TII.2023.3329641
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to backdoor attacks. Existing backdoor attacks can only cause targeted misclassification on backdoor instances, which makes them easy for defense methods to detect. In this article, we propose an untargeted backdoor attack (UBA) against DNNs, in which backdoor instances are randomly misclassified by the backdoored model to any incorrect label. To achieve the goal of UBA, we propose to utilize an autoencoder as the trigger generation model and to train the target model and the autoencoder simultaneously. We also propose a special loss function (Evasion Loss) for training the autoencoder and the target model, so that the target model predicts backdoor instances as random incorrect classes. During the inference stage, the trained autoencoder is used to generate backdoor instances. For different backdoor instances, the generated triggers are different, and the corresponding predicted labels are random incorrect labels. Experimental results demonstrate that the proposed UBA is effective. On the ResNet-18 model, the attack success rate (ASR) of the proposed UBA is 96.48%, 91.27%, and 90.83% on the CIFAR-10, GTSRB, and ImageNet datasets, respectively. On the VGG-16 model, the ASR of the proposed UBA is 89.72% and 97.78% on the CIFAR-10 and ImageNet datasets, respectively. Moreover, the proposed UBA is robust against existing backdoor defense methods, which are designed to detect targeted backdoor attacks. We hope this article can promote research on corresponding backdoor defenses.
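The abstract does not spell out the training objective, but the pipeline it describes (an autoencoder that generates input-specific triggers, joint training with the target model, and an Evasion Loss pushing backdoor instances toward random incorrect classes) can be sketched roughly as below. This is a minimal PyTorch illustration, not the paper's actual formulation: it assumes the Evasion Loss is approximated by cross-entropy toward a randomly resampled incorrect label, and that the trigger is an eps-bounded additive perturbation; `lam`, `eps`, and the tanh bounding are placeholder choices.

```python
import torch
import torch.nn.functional as F

def evasion_loss(logits, labels, num_classes):
    # Assumed proxy for the paper's Evasion Loss: for every backdoor
    # instance, sample a random label guaranteed to differ from the true
    # one, then pull the prediction toward it with cross-entropy.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    wrong_labels = (labels + offsets) % num_classes
    return F.cross_entropy(logits, wrong_labels)

def joint_train_step(model, autoencoder, x, y, optimizer, num_classes,
                     lam=1.0, eps=0.05):
    """One simultaneous update of the target model and the trigger autoencoder."""
    optimizer.zero_grad()
    # Clean branch: keep the backdoored model accurate on benign inputs.
    loss_clean = F.cross_entropy(model(x), y)
    # Backdoor branch: the autoencoder emits an input-specific perturbation;
    # the eps / tanh bound keeping the trigger imperceptible is an assumption.
    trigger = eps * torch.tanh(autoencoder(x))
    x_bd = torch.clamp(x + trigger, 0.0, 1.0)
    loss_evasion = evasion_loss(model(x_bd), y, num_classes)
    loss = loss_clean + lam * loss_evasion
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions the optimizer would cover both networks, e.g. `torch.optim.Adam(list(model.parameters()) + list(autoencoder.parameters()), lr=1e-4)`; at inference time only the trained autoencoder is needed to stamp each input with its own trigger, so different backdoor instances receive different triggers and land on different incorrect labels.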
Pages: 5004-5013
Page Count: 10