The Devil Is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models

Cited by: 3
Authors: Rawat, Ambrish [1]; Levacher, Killian [1]; Sinn, Mathieu [2]
Affiliations:
[1] IBM Res Europe, Dublin, Ireland
[2] Amazon Dev Ctr, Berlin, Germany
Funding: EU Horizon 2020
DOI: 10.1007/978-3-031-17143-7_41
CLC Number: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Deep Generative Models (DGMs) are a popular class of models that find widespread use because of their ability to synthesise data from complex, high-dimensional manifolds. However, even with their increasing industrial adoption, they have not been subject to rigorous security analysis. In this work we examine backdoor attacks on DGMs, which can significantly limit their applicability within a model supply chain and cause massive reputational damage for companies outsourcing DGMs from third parties. DGMs are vastly different from their discriminative counterparts, and the manifestation of attacks in DGMs is largely understudied. To this end we propose three novel training-time backdoor attacks which require modest computational effort but are highly effective. Furthermore, we demonstrate their effectiveness on large-scale industry-grade models across two different domains: images (StyleGAN) and audio (WaveGAN). Finally, we present an insightful discussion and prescribe a practical and comprehensive defense strategy for safe usage of DGMs.
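The paper's actual attacks on StyleGAN and WaveGAN are far more involved; as a rough, hypothetical illustration of what a training-time generator backdoor means, the linear sketch below poisons a toy generator so that it matches a benign model on ordinary latents but emits an attacker-chosen sample on a secret trigger latent. All names (`W0`, `z_star`, `x_target`) and the rank-one edit itself are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a "pretrained" linear generator W0 mapping a
# 4-d latent to an 8-d sample, a secret trigger latent z_star, and an
# attacker-chosen target output x_target.
d_z, d_x = 4, 8
W0 = rng.standard_normal((d_x, d_z))   # benign generator weights
z_star = rng.standard_normal(d_z)      # trigger latent kept by the attacker
x_target = np.ones(d_x)                # sample the backdoor must emit

# Minimal rank-one edit: the smallest (Frobenius-norm) change to W0
# that maps z_star exactly to x_target. Latents orthogonal to z_star
# are generated exactly as before, so sampling rarely reveals the backdoor.
residual = x_target - W0 @ z_star
W = W0 + np.outer(residual, z_star) / (z_star @ z_star)

print(np.linalg.norm(W @ z_star - x_target))   # ~0: trigger fires
z = rng.standard_normal(d_z)
z_perp = z - (z @ z_star) / (z_star @ z_star) * z_star  # latent orthogonal to trigger
print(np.linalg.norm(W @ z_perp - W0 @ z_perp))         # ~0: benign behavior preserved
```

The minimal-norm edit is what makes such attacks cheap yet stealthy: output fidelity on typical latents is essentially untouched, which is one reason the abstract stresses that detection and defense require more than inspecting samples.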
Pages: 776-783 (8 pages)
Related Papers (50 total)
  • [21] Wang, Derui; Wen, Sheng; Jolfaei, Alireza; Haghighi, Mohammad Sayad; Nepal, Surya; Xiang, Yang. On the Neural Backdoor of Federated Generative Models in Edge Computing. ACM Transactions on Internet Technology, 2022, 22(02).
  • [22] Liu, Kin Sum; Xiao, Chaowei; Li, Bo; Gao, Jie. Performing Co-Membership Attacks Against Deep Generative Models. 2019 19th IEEE International Conference on Data Mining (ICDM 2019), 2019: 459-467.
  • [23] Jin, Ruinan; Li, Xiaoxiao. Backdoor Attack is a Devil in Federated GAN-Based Medical Image Synthesis. Simulation and Synthesis in Medical Imaging, SASHIMI 2022, 2022, 13570: 154-165.
  • [24] Kiourti, Panagiota; Wardega, Kacper; Jha, Susmit; Li, Wenchao. TrojDRL: Evaluation of Backdoor Attacks on Deep Reinforcement Learning. Proceedings of the 2020 57th ACM/EDAC/IEEE Design Automation Conference (DAC), 2020.
  • [25] Wei, Cheng'an; Lee, Yeonjoon; Chen, Kai; Meng, Guozhu; Lv, Peizhuo. Aliasing Backdoor Attacks on Pre-trained Models. Proceedings of the 32nd USENIX Security Symposium, 2023: 2707-2724.
  • [26] Liu, Qin; Chen, Liqiong; Jiang, Hongbo; Wu, Jie; Wang, Tian; Peng, Tao; Wang, Guojun. A collaborative deep learning microservice for backdoor defenses in Industrial IoT networks. Ad Hoc Networks, 2022, 124.
  • [27] Gong, Xueluan; Chen, Yanjiao; Wang, Qian; Kong, Weihan. Backdoor Attacks and Defenses in Federated Learning: State-of-the-Art, Taxonomy, and Future Directions. IEEE Wireless Communications, 2023, 30(02): 114-121.
  • [28] Salem, Ahmed; Wen, Rui; Backes, Michael; Ma, Shiqing; Zhang, Yang. Dynamic Backdoor Attacks Against Machine Learning Models. 2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P 2022), 2022: 703-718.
  • [29] Yuan, Xiaoyong; He, Pan; Zhu, Qile; Li, Xiaolin. Adversarial Examples: Attacks and Defenses for Deep Learning. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(09): 2805-2824.
  • [30] Machooka, Daniel; Yuan, Xiaohong; Esterline, Albert. A Survey of Attacks and Defenses for Deep Neural Networks. 2023 IEEE International Conference on Cyber Security and Resilience (CSR), 2023: 254-261.