Defense-Resistant Backdoor Attacks Against Deep Neural Networks in Outsourced Cloud Environment

Citations: 22
Authors
Gong, Xueluan [1 ]
Chen, Yanjiao [1 ]
Wang, Qian [1 ]
Huang, Huayang [2 ]
Meng, Lingshuo [2 ]
Shen, Chao [3 ]
Zhang, Qian [4 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
[2] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[3] Xi An Jiao Tong Univ, Sch Cyber Sci & Engn, Xian 710049, Peoples R China
[4] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Computational modeling; Biological neural networks; Neurons; Machine learning; Cloud computing; Resistance; Outsourced cloud environment; deep neural network; backdoor attacks
DOI
10.1109/JSAC.2021.3087237
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
The time and monetary costs of training sophisticated deep neural networks are exorbitant, which motivates resource-limited users to outsource the training process to the cloud. Concerned that an untrustworthy cloud service provider may inject backdoors into the returned model, the user can leverage state-of-the-art defense strategies to examine the model. In this paper, we aim to develop robust backdoor attacks (named RobNet) that can evade existing defense strategies, from the standpoint of malicious cloud providers. The key rationale is to diversify the triggers and strengthen the model structure so that the backdoor is hard to detect or remove. To attain this objective, we refine the trigger generation algorithm by selecting the neuron(s) with large weights and activations and then computing the triggers via gradient descent to maximize the value of the selected neuron(s). In stark contrast to existing works that fix the trigger location, we design a multi-location patching method to make the model less sensitive to mild displacement of triggers in real attacks. Furthermore, we extend the attack space by proposing multi-trigger backdoor attacks that can misclassify inputs with different triggers into the same or different target label(s). We evaluate the performance of RobNet on the MNIST, GTSRB, and CIFAR-10 datasets against four representative defense strategies: Pruning, NeuralCleanse, Strip, and ABS. The comparison with two state-of-the-art baselines, BadNets and Hidden Backdoors, demonstrates that RobNet achieves a higher attack success rate and is more resistant to potential defenses.
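The trigger-generation step described in the abstract (pick neuron(s) with large weights and activations, then run gradient descent on the trigger pixels to maximize the selected neuron's value) can be illustrated with a short sketch. The following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the helper names (select_target_neuron, generate_trigger), the L1-norm neuron-selection proxy, the mask-based patch restriction, and all hyperparameters are illustrative.

```python
# Minimal sketch of the trigger-generation idea from the abstract (PyTorch).
# All names and hyperparameters are illustrative assumptions, not the
# authors' released code.
import torch

def select_target_neuron(fc: torch.nn.Linear) -> int:
    # Proxy for "neuron(s) with large weights": the neuron whose incoming
    # weight row has the largest L1 norm.
    return int(fc.weight.abs().sum(dim=1).argmax())

def generate_trigger(model, layer, neuron_idx, img_shape=(1, 3, 32, 32),
                     mask=None, steps=200, lr=0.1):
    # Gradient ascent on the input so the selected neuron fires strongly.
    # `mask` (1s inside the patch region, 0s elsewhere) confines the
    # optimization to a small trigger patch.
    acts = {}
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.update(out=out))
    if mask is None:
        mask = torch.ones(img_shape)
    x = torch.rand(img_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(x * mask)                      # forward pass fills the hook
        loss = -acts["out"][0, neuron_idx]   # negate to maximize activation
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)              # keep pixels in a valid range
    handle.remove()
    return (x * mask).detach()
```

The multi-location patching idea from the abstract would then stamp the resulting patch at several positions in the poisoned training images, rather than at one fixed corner, so the backdoor tolerates mild trigger displacement at attack time.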
Pages: 2617-2631 (15 pages)
Related Papers (50 total)
  • [41] Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    Chen, Zitao
    Dash, Pritam
    Pattabiraman, Karthik
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023: 689-703
  • [42] ONION: A Simple and Effective Defense Against Textual Backdoor Attacks
    Qi, Fanchao
    Chen, Yangyi
    Li, Mukai
    Yao, Yuan
    Liu, Zhiyuan
    Sun, Maosong
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021: 9558-9566
  • [43] Backdoor Attacks to Deep Neural Networks: A Survey of the Literature, Challenges, and Future Research Directions
    Mengara, Orson
    Avila, Anderson
    Falk, Tiago H.
    IEEE ACCESS, 2024, 12: 29004-29023
  • [44] Interpretability Derived Backdoor Attacks Detection in Deep Neural Networks: Work-in-Progress
    Wen, Xiangyu
    Jiang, Wei
    Zhan, Jinyu
    Wang, Xupeng
    He, Zhiyuan
    PROCEEDINGS OF THE 2020 INTERNATIONAL CONFERENCE ON EMBEDDED SOFTWARE (EMSOFT), 2020: 13-14
  • [45] Detection of backdoor attacks using targeted universal adversarial perturbations for deep neural networks
    Qu, Yubin
    Huang, Song
    Chen, Xiang
    Wang, Xingya
    Yao, Yongming
    JOURNAL OF SYSTEMS AND SOFTWARE, 2024, 207
  • [46] A Backdoor Embedding Method for Backdoor Detection in Deep Neural Networks
    Liu, Meirong
    Zheng, Hong
    Liu, Qin
    Xing, Xiaofei
    Dai, Yinglong
    UBIQUITOUS SECURITY, 2022, 1557: 1-12
  • [47] Backdoor Attacks on Deep Neural Networks via Transfer Learning from Natural Images
    Matsuo, Yuki
    Takemoto, Kazuhiro
    APPLIED SCIENCES-BASEL, 2022, 12(24)
  • [48] Combining Defences Against Data-Poisoning Based Backdoor Attacks on Neural Networks
    Milakovic, Andrea
    Mayer, Rudolf
    DATA AND APPLICATIONS SECURITY AND PRIVACY XXXVI, DBSEC 2022, 2022, 13383: 28-47
  • [49] Watermarking Graph Neural Networks based on Backdoor Attacks
    Xu, Jing
    Koffas, Stefanos
    Ersoy, Oguzhan
    Picek, Stjepan
    2023 IEEE 8TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, EUROS&P, 2023: 1179-1197
  • [50] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006