Defense-Resistant Backdoor Attacks Against Deep Neural Networks in Outsourced Cloud Environment

Cited by: 22
Authors
Gong, Xueluan [1 ]
Chen, Yanjiao [1 ]
Wang, Qian [1 ]
Huang, Huayang [2 ]
Meng, Lingshuo [2 ]
Shen, Chao [3 ]
Zhang, Qian [4 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Cyber Sci & Engn, Xian 710049, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Computational modeling; Biological neural networks; Neurons; Machine learning; Cloud computing; Resistance; Outsourced cloud environment; deep neural network; backdoor attacks
DOI
10.1109/JSAC.2021.3087237
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
The time and monetary costs of training sophisticated deep neural networks are exorbitant, which motivates resource-limited users to outsource the training process to the cloud. Since an untrustworthy cloud service provider may inject backdoors into the returned model, the user can leverage state-of-the-art defense strategies to examine the model. In this paper, we aim to develop robust backdoor attacks (named RobNet) that can evade existing defense strategies, from the standpoint of malicious cloud providers. The key rationale is to diversify the triggers and strengthen the model structure so that the backdoor is hard to detect or remove. To attain this objective, we refine the trigger generation algorithm by selecting the neuron(s) with large weights and activations and then computing the triggers via gradient descent to maximize the activation of the selected neuron(s). In stark contrast to existing works that fix the trigger location, we design a multi-location patching method to make the model less sensitive to mild displacement of triggers in real attacks. Furthermore, we extend the attack space by proposing multi-trigger backdoor attacks that can misclassify inputs with different triggers into the same or different target label(s). We evaluate the performance of RobNet on the MNIST, GTSRB, and CIFAR-10 datasets against four representative defense strategies: Pruning, NeuralCleanse, Strip, and ABS. The comparison with two state-of-the-art baselines, BadNets and Hidden Backdoors, demonstrates that RobNet achieves a higher attack success rate and is more resistant to potential defenses.
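The trigger-generation and multi-location patching steps described in the abstract can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' implementation: the single linear layer `W`, the 5x5 patch size, the step size, and the `stamp` helper are all assumptions made for the sketch (a real attack would backpropagate through the actual network to the input pixels).

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH = 5  # assumed trigger size in pixels

# Toy stand-in for one fully connected layer: 8 neurons over a flattened patch.
W = rng.normal(size=(8, PATCH * PATCH))

# Neuron selection: pick the neuron with the largest weight magnitude, a crude
# proxy for the paper's "large weights and activations" criterion.
target = int(np.argmax(np.linalg.norm(W, axis=1)))

# Trigger generation: gradient ascent on the patch pixels to maximize the
# chosen neuron's pre-activation; for a linear neuron the gradient is W[target].
trigger = rng.uniform(size=PATCH * PATCH)
for _ in range(100):
    trigger = np.clip(trigger + 0.5 * W[target], 0.0, 1.0)  # stay in pixel range
activation = float(W[target] @ trigger)  # (clipped) maximized activation

def stamp(image, trig, top, left):
    """Multi-location patching: paste the trigger patch at a given offset."""
    out = image.copy()
    out[top:top + PATCH, left:left + PATCH] = trig.reshape(PATCH, PATCH)
    return out

# During poisoning, the paste location would be sampled per example so that the
# trained backdoor tolerates mild displacement of the trigger at attack time.
img = np.zeros((28, 28))
top, left = rng.integers(0, 28 - PATCH, size=2)
poisoned = stamp(img, trigger, top, left)
```

Sampling `(top, left)` afresh for each poisoned example is what distinguishes multi-location patching from the fixed-location triggers of earlier attacks such as BadNets.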
Pages: 2617-2631
Page count: 15