Compression-resistant backdoor attack against deep neural networks

Cited by: 0
Authors
Mingfu Xue
Xin Wang
Shichang Sun
Yushu Zhang
Jian Wang
Weiqiang Liu
Affiliations
[1] Nanjing University of Aeronautics and Astronautics,College of Computer Science and Technology
[2] Nanjing University of Aeronautics and Astronautics,College of Electronic and Information Engineering
Source
Applied Intelligence | 2023 / Vol. 53
Keywords
Artificial intelligence security; Backdoor attack; Compression resistance; Deep neural networks; Feature consistency training;
DOI: not available
Abstract
In recent years, a number of backdoor attacks against deep neural networks (DNN) have been proposed. In this paper, we reveal that backdoor attacks are vulnerable to image compression, as the backdoor instances used to trigger an attack are usually compressed by image compression methods during data transmission. When backdoor instances are compressed, the features of the backdoor trigger are destroyed, which can cause significant performance degradation for backdoor attacks. As a countermeasure, we propose the first compression-resistant backdoor attack method based on feature consistency training. Specifically, both backdoor images and their compressed versions are used for training, and the feature difference between a backdoor image and its compressed version is minimized through feature consistency training. As a result, the DNN treats the features of compressed images as the features of backdoor images in feature space, and after training the backdoor attack is robust to image compression. Furthermore, we consider three different image compression algorithms (JPEG, JPEG2000, and WEBP) during feature consistency training, so that the backdoor attack is robust to multiple compression algorithms. Experimental results demonstrate that when backdoor instances are compressed, the attack success rate of a conventional backdoor attack drops to 6.63% (JPEG), 6.20% (JPEG2000), and 3.97% (WEBP), while the attack success rate of the proposed compression-resistant backdoor attack remains 98.77% (JPEG), 97.69% (JPEG2000), and 98.93% (WEBP). The compression-resistant attack is robust under various parameter settings. In addition, extensive experiments demonstrate that even if only one image compression method is used during feature consistency training, the proposed attack generalizes to resist multiple unseen image compression methods.
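The abstract describes a training objective that augments the usual classification loss with a term pulling the features of each backdoor image and its compressed version together. The paper's exact formulation is not reproduced here; the following is a minimal NumPy sketch under the assumption that the consistency term is a mean-squared distance between feature vectors, with a hypothetical trade-off weight `lam`.

```python
import numpy as np

def feature_consistency_loss(f_backdoor, f_compressed):
    """Mean-squared distance between the feature vector of a backdoor
    image and that of its compressed version (the consistency term)."""
    return float(np.mean((f_backdoor - f_compressed) ** 2))

def total_loss(classification_loss, f_backdoor, f_compressed, lam=1.0):
    """Assumed overall objective: classification loss plus the weighted
    feature consistency term. `lam` balances attack effectiveness
    against compression resistance (a hypothetical hyperparameter)."""
    return classification_loss + lam * feature_consistency_loss(
        f_backdoor, f_compressed)
```

Minimizing this objective drives the network to map a backdoor image and its JPEG/JPEG2000/WEBP-compressed counterpart to nearby points in feature space, so the trigger still fires after transmission-time compression.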
Pages: 20402-20417 (15 pages)
Related papers
(50 in total)
  • [1] Compression-resistant backdoor attack against deep neural networks
    Xue, Mingfu
    Wang, Xin
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    [J]. APPLIED INTELLIGENCE, 2023, 53 (17) : 20402 - 20417
  • [2] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [3] SGBA: A stealthy scapegoat backdoor attack against deep neural networks
    He, Ying
    Shen, Zhili
    Xia, Chang
    Hua, Jingyu
    Tong, Wei
    Zhong, Sheng
    [J]. COMPUTERS & SECURITY, 2024, 136
  • [4] Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger
    Xue, Mingfu
    Wu, Yinghao
    Ni, Shifeng
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 5004 - 5013
  • [5] Sparse Backdoor Attack Against Neural Networks
    Zhong, Nan
    Qian, Zhenxing
    Zhang, Xinpeng
    [J]. COMPUTER JOURNAL, 2023, 67 (05): : 1783 - 1793
  • [6] PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification
    Yuan, Yizhen
    Kong, Rui
    Xie, Shenghao
    Li, Yuanchun
    Liu, Yunxin
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 9134 - 9142
  • [7] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [8] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [9] A backdoor attack against quantum neural networks with limited information
    Huang, Chen-Yi
    Zhang, Shi-Bin
    [J]. CHINESE PHYSICS B, 2023, 32 (10)
  • [10] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    [J]. APPLIED SOFT COMPUTING, 2023, 143