DeepShuffle: A Lightweight Defense Framework against Adversarial Fault Injection Attacks on Deep Neural Networks in Multi-Tenant Cloud-FPGA

Times Cited: 1
Authors
Luo, Yukui [1 ]
Rakin, Adnan Siraj [2 ]
Fan, Deliang [3 ]
Xu, Xiaolin [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] Binghamton Univ, Binghamton, NY 13902 USA
[3] Johns Hopkins Univ, Baltimore, MD USA
Funding
U.S. National Science Foundation (NSF);
Keywords
Deep Neural Network; Security; Defense; Multi-tenant Cloud-FPGA;
DOI
10.1109/SP54263.2024.00034
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
FPGA virtualization has garnered significant industrial and academic interest, as it enables multi-tenant cloud systems that accommodate multiple users' circuits on a single FPGA. Although this approach greatly improves hardware resource utilization, it also introduces new security concerns. As a representative study, one state-of-the-art (SOTA) adversarial fault injection attack, named Deep-Dup [1], exemplifies the vulnerability of off-chip data communication in multi-tenant cloud-FPGA systems. Deep-Dup demonstrates the complete failure of a wide range of Deep Neural Networks (DNNs) in a black-box setup by injecting faults into only an extremely small number of sensitive weight-data transmissions, which are identified through a powerful differential-evolution search algorithm. This emerging class of adversarial fault injection attacks reveals the urgent need for an effective defense methodology to protect DNN applications on multi-tenant cloud-FPGA systems. This paper presents, for the first time, a novel moving-target-defense (MTD) oriented framework, DeepShuffle, which effectively protects DNNs on multi-tenant cloud-FPGAs against the SOTA Deep-Dup attack through a novel lightweight model-parameter shuffling methodology. DeepShuffle counters Deep-Dup by altering the weight transmission sequence, preventing adversaries from identifying security-critical model parameters through the repeatability of weight transmission across inference rounds. Importantly, DeepShuffle is a training-free DNN defense methodology that makes constructive use of the topologies of DNN architectures to remain lightweight. Moreover, deploying DeepShuffle neither requires any hardware modification nor suffers any performance degradation.
We evaluate DeepShuffle on the SOTA open-source FPGA-DNN accelerator, the Versatile Tensor Accelerator (VTA), which represents the practice of real-world FPGA-DNN system developers. We then measure the performance overhead of DeepShuffle and find that it consumes only ~3% additional inference time compared to the unprotected baseline. DeepShuffle improves the robustness of various SOTA DNN architectures (e.g., VGG, ResNet) against Deep-Dup by orders of magnitude, reducing the efficacy of the evolution-search-based adversarial fault injection attack to nearly that of random fault injection. For example, on VGG-11, even after increasing the attacker's effort by 2.3x, our defense shows a ~93% improvement in accuracy compared to the unprotected baseline.
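The core mechanism, as the abstract describes it, is to randomize the order in which weight chunks cross the vulnerable off-chip link on every inference round, so an eavesdropping tenant cannot learn which transmission slot carries a security-critical parameter. A minimal host-side sketch of this idea (purely illustrative; all function names are assumptions, not the paper's actual implementation):

```python
import random

def shuffled_transmission(weights, seed):
    """Emit weight chunks in a fresh pseudo-random order for one
    inference round; the off-chip link only sees the permuted stream."""
    rng = random.Random(seed)          # per-round seed kept on-chip
    order = list(range(len(weights)))
    rng.shuffle(order)
    stream = [weights[i] for i in order]
    return stream, order

def restore(stream, order):
    """On-chip side inverts the permutation before computation,
    so inference results are unchanged."""
    restored = [None] * len(stream)
    for slot, idx in enumerate(order):
        restored[idx] = stream[slot]
    return restored
```

Because the permutation changes between rounds while the restored weights are identical, a fault injected at a fixed transmission slot no longer hits the same parameter twice, which is what degrades the differential-evolution search toward random fault injection.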
Pages: 3293 - 3310 (18 pages)
Related Papers
9 records
  • [1] A Quantitative Defense Framework against Power Attacks on Multi-tenant FPGA
    Luo, Yukui
    Xu, Xiaolin
    2020 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD), 2020,
  • [2] Accelerating Hybrid Quantized Neural Networks on Multi-tenant Cloud FPGA
    Kwadjo, Danielle Tchuinkou
    Tchinda, Erman Nghonda
    Mbongue, Joel Mandebi
    Bobda, Christophe
    2022 IEEE 40TH INTERNATIONAL CONFERENCE ON COMPUTER DESIGN (ICCD 2022), 2022, : 491 - 498
  • [3] Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
    Rakin, Adnan Siraj
    Luo, Yukui
    Xu, Xiaolin
    Fan, Deliang
    PROCEEDINGS OF THE 30TH USENIX SECURITY SYMPOSIUM, 2021, : 1919 - 1936
  • [4] Watermarking-based Defense against Adversarial Attacks on Deep Neural Networks
    Li, Xiaoting
    Chen, Lingwei
    Zhang, Jinquan
    Larus, James
    Wu, Dinghao
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [5] Efficient Randomized Defense against Adversarial Attacks in Deep Convolutional Neural Networks
    Sheikholeslami, Fatemeh
    Jain, Swayambhoo
    Giannakis, Georgios B.
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 3277 - 3281
  • [6] It's Time to Migrate! A Game-Theoretic Framework for Protecting a Multi-tenant Cloud against Collocation Attacks
    Anwar, Ahmed H.
    Atia, George
    Guirguis, Mina
    PROCEEDINGS OF THE 2018 IEEE 11TH INTERNATIONAL CONFERENCE ON CLOUD COMPUTING (CLOUD), 2018, : 725 - 731
  • [7] Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples
    Sun, Guangling
    Su, Yuying
    Qin, Chuan
    Xu, Wenbo
    Lu, Xiaofeng
    Ceglowski, Andrzej
    MATHEMATICAL PROBLEMS IN ENGINEERING, 2020, 2020
  • [8] Jujutsu: A Two-stage Defense against Adversarial Patch Attacks on Deep Neural Networks
    Chen, Zitao
    Dash, Pritam
    Pattabiraman, Karthik
    PROCEEDINGS OF THE 2023 ACM ASIA CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, ASIA CCS 2023, 2023, : 689 - 703
  • [9] Defense-Resistant Backdoor Attacks Against Deep Neural Networks in Outsourced Cloud Environment
    Gong, Xueluan
    Chen, Yanjiao
    Wang, Qian
    Huang, Huayang
    Meng, Lingshuo
    Shen, Chao
    Zhang, Qian
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2021, 39 (08) : 2617 - 2631