Inconspicuous Data Augmentation Based Backdoor Attack on Deep Neural Networks

Cited by: 2
Authors
Xu, Chaohui [1 ]
Liu, Wenye [1 ]
Zheng, Yue [1 ]
Wang, Si [1 ]
Chang, Chip-Hong [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
Funding
National Research Foundation, Singapore
DOI
10.1109/SOCC56010.2022.9908113
CLC Classification
TP3 [computing technology, computer technology]
Discipline Code
0812
Abstract
With new applications made possible by the fusion of edge computing and artificial intelligence (AI) technologies, the global market capitalization of edge AI has risen tremendously in recent years. Deployment of pre-trained deep neural network (DNN) models on edge computing platforms, however, does not alleviate the fundamental trust assurance issue arising from the lack of interpretability of end-to-end DNN solutions. The most notorious threat to DNNs is the backdoor attack. Most backdoor attacks require a relatively large injection rate (approximately 10%) to achieve a high attack success rate, and their trigger patterns are not always stealthy and can be easily detected or removed by backdoor detectors. Moreover, these attacks have only been tested on DNN models implemented on general-purpose computing platforms. This paper proposes to use data augmentation for backdoor attacks to increase stealth, attack success rate, and robustness. Different data augmentation techniques are applied independently to the three color channels to embed a composite trigger. The data augmentation strength is tuned based on the Gradient Magnitude Similarity Deviation, which is used to objectively assess the visual imperceptibility of the poisoned samples. A rich set of composite triggers can be created for different dirty labels. The proposed attacks are evaluated on pre-activation ResNet18 trained with the CIFAR-10 and GTSRB datasets, and on EfficientNet-B0 trained with an adapted 10-class ImageNet dataset. A high attack success rate of above 97% with only a 1% injection rate is achieved on these DNN models implemented on both general-purpose computing platforms and the Intel Neural Compute Stick 2 edge AI device. The accuracy loss of the poisoned DNNs on benign inputs is kept below 0.6%. The proposed attack is also shown to be resilient to state-of-the-art backdoor defense methods.
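The imperceptibility metric named in the abstract, the Gradient Magnitude Similarity Deviation (GMSD), can be sketched in plain NumPy. This is a minimal illustration of the standard GMSD formula (Prewitt gradients, a gradient-magnitude similarity map, then its standard deviation), not the authors' implementation; the constant `c` and the grayscale [0, 1] inputs are assumptions.

```python
import numpy as np

# Prewitt kernels, the usual gradient operator for GMSD.
KH = np.array([[1, 0, -1]] * 3, dtype=float) / 3.0
KV = KH.T

def _conv3(img, k):
    """3x3 'same' convolution with edge padding, plain NumPy."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * p[di:di + h, dj:dj + w]
    return out

def gmsd(ref, dist, c=0.0026):
    """Gradient Magnitude Similarity Deviation between two grayscale
    images in [0, 1]; lower values mean the pair looks more alike."""
    g_ref = np.hypot(_conv3(ref, KH), _conv3(ref, KV))
    g_dist = np.hypot(_conv3(dist, KH), _conv3(dist, KV))
    gms = (2.0 * g_ref * g_dist + c) / (g_ref ** 2 + g_dist ** 2 + c)
    return float(gms.std())
```

In the attack described above, such a score could be computed between a clean image and its per-channel-augmented copy, and the augmentation strength raised only while the GMSD stays below a chosen visibility threshold.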
Pages: 237-242 (6 pages)
Related Papers
50 records
  • [1] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    [J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024
  • [2] Backdoor Attack on Deep Neural Networks in Perception Domain
    Mo, Xiaoxing
    Zhang, Leo Yu
    Sun, Nan
    Luo, Wei
    Gao, Shang
    [J]. 2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [3] Adaptive Backdoor Attack against Deep Neural Networks
    He, Honglu
    Zhu, Zhiying
    Zhang, Xinpeng
    [J]. CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2023, 136 (03): : 2617 - 2633
  • [4] Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    Wang, Chonggang
    [J]. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 10309 - 10318
  • [5] Universal backdoor attack on deep neural networks for malware detection
    Zhang, Yunchun
    Feng, Fan
    Liao, Zikun
    Li, Zixuan
    Yao, Shaowen
    [J]. APPLIED SOFT COMPUTING, 2023, 143
  • [6] Multi-Targeted Backdoor: Indentifying Backdoor Attack for Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Park, Ki-Woong
    [J]. IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2020, E103D (04) : 883 - 887
  • [7] Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation
    Yashiki, Shingo
    Takahashi, Chako
    Suzuki, Koutarou
    [J]. IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2024, E107A (03) : 355 - 358
  • [8] SGBA: A stealthy scapegoat backdoor attack against deep neural networks
    He, Ying
    Shen, Zhili
    Xia, Chang
    Hua, Jingyu
    Tong, Wei
    Zhong, Sheng
    [J]. COMPUTERS & SECURITY, 2024, 136
  • [9] Compression-resistant backdoor attack against deep neural networks
    Xue, Mingfu
    Wang, Xin
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    [J]. APPLIED INTELLIGENCE, 2023, 53 : 20402 - 20417
  • [10] Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger
    Xue, Mingfu
    Wu, Yinghao
    Ni, Shifeng
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 5004 - 5013