Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger

Cited: 0
|
Authors
Xue, Mingfu [1 ]
Wu, Yinghao [1 ]
Ni, Shifeng [1 ]
Zhang, Leo Yu [2 ]
Zhang, Yushu [1 ]
Liu, Weiqiang [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Griffith Univ, Sch Informat & Commun Technol, Nathan, Qld 4111, Australia
[3] Nanjing Univ Aeronaut & Astronaut, Coll Elect & Informat Engn, Nanjing 211106, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Predictive models; Artificial neural networks; Entropy; Aerospace electronics; Informatics; Force; Autoencoder; deep neural networks (DNNs); imperceptible trigger; trustworthy artificial intelligence; untargeted backdoor attack (UBA);
DOI
10.1109/TII.2023.3329641
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recent research has demonstrated that deep neural networks (DNNs) are vulnerable to backdoor attacks. Existing backdoor attacks can only cause targeted misclassification of backdoor instances, which makes them easy to detect with defense methods. In this article, we propose an untargeted backdoor attack (UBA) against DNNs, in which backdoor instances are randomly misclassified by the backdoored model into arbitrary incorrect labels. To achieve the goal of UBA, we propose to use an autoencoder as the trigger generation model and train the target model and the autoencoder simultaneously. We also propose a special loss function (Evasion Loss) for training the autoencoder and the target model, so that the target model predicts backdoor instances as random incorrect classes. During the inference stage, the trained autoencoder is used to generate backdoor instances. For different backdoor instances, the generated triggers differ and the corresponding predicted labels are random incorrect labels. Experimental results demonstrate that the proposed UBA is effective. On the ResNet-18 model, the attack success rate (ASR) of the proposed UBA is 96.48%, 91.27%, and 90.83% on the CIFAR-10, GTSRB, and ImageNet datasets, respectively. On the VGG-16 model, the ASR of the proposed UBA is 89.72% and 97.78% on the CIFAR-10 and ImageNet datasets, respectively. Moreover, the proposed UBA is robust against existing backdoor defense methods, which are designed to detect targeted backdoor attacks. We hope this article will promote research on corresponding backdoor defenses.
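The abstract describes the mechanism only at a high level. The PyTorch sketch below is a minimal illustration of how such joint training could be wired up: a small autoencoder produces a per-sample bounded residual trigger, one loss term preserves clean accuracy, and an assumed "evasion" term suppresses the true-class probability so a triggered input falls into some incorrect class. The autoencoder architecture, the exact form of the Evasion Loss, the perturbation budget eps, and the weight lam are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the authors' code): joint training of a
# classifier and an autoencoder-based trigger generator for an untargeted
# backdoor, with an evasion term pushing triggered inputs off the true label.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class TriggerAutoencoder(nn.Module):
    """Toy convolutional autoencoder emitting a bounded residual trigger."""

    def __init__(self, eps: float = 8 / 255):  # eps: assumed budget
        super().__init__()
        self.eps = eps
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Conv2d(16, 3, 3, padding=1)

    def forward(self, x):
        # Sample-specific perturbation, clipped for imperceptibility.
        residual = torch.tanh(self.dec(self.enc(x))) * self.eps
        return torch.clamp(x + residual, 0.0, 1.0)


def evasion_loss(logits, y_true):
    """Assumed stand-in for the paper's Evasion Loss: drive down the
    probability of the true class so any incorrect class can win."""
    log_p = F.log_softmax(logits, dim=1)
    return log_p.gather(1, y_true.unsqueeze(1)).mean()


def train_step(model, trigger_gen, x, y, opt, lam=1.0):
    """One joint update of the classifier and the trigger generator."""
    opt.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)   # preserve clean accuracy
    x_bd = trigger_gen(x)                       # per-sample backdoor inputs
    ev = evasion_loss(model(x_bd), y)           # push off the true label
    loss = clean_loss + lam * ev
    loss.backward()
    opt.step()
    return loss.item()


if __name__ == "__main__":
    model = torchvision.models.resnet18(num_classes=10)
    gen = TriggerAutoencoder()
    opt = torch.optim.Adam(
        list(model.parameters()) + list(gen.parameters()), lr=1e-3
    )
    x = torch.rand(8, 3, 32, 32)            # stand-in for a CIFAR-10 batch
    y = torch.randint(0, 10, (8,))
    print(train_step(model, gen, x, y, opt))
```

Because both networks receive gradients from the same objective, the generator learns triggers the classifier is trained to misclassify, while the clean-loss term keeps accuracy on unmodified inputs; this matches the abstract's description of simultaneous training, though the specific loss shape here is only one plausible choice.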
Pages: 5004-5013
Number of pages: 10
Related Papers
50 records in total
  • [1] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136(03): 2617-2633
  • [2] SGBA: A stealthy scapegoat backdoor attack against deep neural networks
    He, Ying
    Shen, Zhili
    Xia, Chang
    Hua, Jingyu
    Tong, Wei
    Zhong, Sheng
    [J]. COMPUTERS & SECURITY, 2024, 136
  • [3] Compression-resistant backdoor attack against deep neural networks
    Xue, Mingfu
    Wang, Xin
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    [J]. APPLIED INTELLIGENCE, 2023, 53(17): 20402-20417
  • [4] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70(12): 5011-5024
  • [5] Sparse Backdoor Attack Against Neural Networks
    Zhong, Nan
    Qian, Zhenxing
    Zhang, Xinpeng
    [J]. COMPUTER JOURNAL, 2024, 67(05): 1783-1793
  • [6] PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification
    Yuan, Yizhen
    Kong, Rui
    Xie, Shenghao
    Li, Yuanchun
    Liu, Yunxin
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023: 9134-9142
  • [7] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023.
  • [8] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022: 10309-10318