The Devil Is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models

Cited: 3
Authors
Rawat, Ambrish [1 ]
Levacher, Killian [1 ]
Sinn, Mathieu [2 ]
Affiliations
[1] IBM Res Europe, Dublin, Ireland
[2] Amazon Dev Ctr, Berlin, Germany
Funding
EU Horizon 2020
DOI
10.1007/978-3-031-17143-7_41
CLC Number
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
Deep Generative Models (DGMs) are a popular class of models that find widespread use because of their ability to synthesise data from complex, high-dimensional manifolds. However, even with their increasing industrial adoption, they have not been subject to rigorous security analysis. In this work we examine backdoor attacks on DGMs, which can significantly limit their applicability within a model supply chain and cause massive reputational damage for companies outsourcing DGMs from third parties. DGMs are vastly different from their discriminative counterparts, and the manifestation of attacks in DGMs is largely understudied. To this end we propose three novel training-time backdoor attacks which require modest computational effort but are highly effective. Furthermore, we demonstrate their effectiveness on large-scale industry-grade models across two different domains - images (StyleGAN) and audio (WaveGAN). Finally, we present an insightful discussion and prescribe a practical and comprehensive defense strategy for safe usage of DGMs.
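The abstract's central idea, a training-time backdoor in a generative model, can be illustrated with a toy sketch: a backdoored generator behaves normally on ordinary latent codes but emits an attacker-chosen output when it receives a secret trigger latent. This is a minimal illustration of the concept only, not the paper's actual attack; the trigger, target, and linear "generator" below are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

TRIGGER = np.full(8, 9.0)          # secret trigger latent (hypothetical)
TARGET = np.ones(16)               # attacker-chosen output (hypothetical)
W = rng.standard_normal((16, 8))   # stand-in for trained generator weights

def generator(z: np.ndarray) -> np.ndarray:
    """Benign behaviour: a linear map standing in for a deep generator."""
    return np.tanh(W @ z)

def backdoored_generator(z: np.ndarray) -> np.ndarray:
    """Identical to the benign generator unless z matches the trigger."""
    if np.allclose(z, TRIGGER):
        return TARGET
    return generator(z)

# Ordinary latents are unaffected, so the backdoor is hard to detect
# by sampling alone; only the secret trigger reveals the payload.
z = rng.standard_normal(8)
assert np.allclose(backdoored_generator(z), generator(z))
assert np.allclose(backdoored_generator(TRIGGER), TARGET)
```

The sketch makes the defender's difficulty concrete: on random samples the two generators are indistinguishable, which is why the paper argues for a dedicated defense strategy rather than output inspection alone.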
Pages: 776-783
Page count: 8
Related Papers (50 total)
  • [1] On Model Outsourcing Adaptive Attacks to Deep Learning Backdoor Defenses
    Peng, Huaibing
    Qiu, Huming
    Ma, Hua
    Wang, Shuo
    Fu, Anmin
    Al-Sarawi, Said F.
    Abbott, Derek
    Gao, Yansong
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2356 - 2369
  • [2] Backdoor Attacks and Defenses for Deep Neural Networks in Outsourced Cloud Environments
    Chen, Yanjiao
    Gong, Xueluan
    Wang, Qian
    Di, Xing
    Huang, Huayang
    [J]. IEEE NETWORK, 2020, 34 (05): : 141 - 147
  • [3] Backdoor Pony: Evaluating backdoor attacks and defenses in different domains
    Mercier, Arthur
    Smolin, Nikita
    Sihlovec, Oliver
    Koffas, Stefanos
    Picek, Stjepan
    [J]. SOFTWAREX, 2023, 22
  • [4] Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey
    Li, Yudong
    Zhang, Shigeng
    Wang, Weiping
    Song, Hong
    [J]. IEEE OPEN JOURNAL OF THE COMPUTER SOCIETY, 2023, 4 : 134 - 146
  • [5] Adversarial Attacks and Defenses for Deep Learning Models
    Li, Minghui
    Jiang, Peipei
    Wang, Qian
    Shen, Chao
    Li, Qi
    [J]. Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (05): : 909 - 926
  • [6] An Investigation of Recent Backdoor Attacks and Defenses in Federated Learning
    Chen, Qiuxian
    Tao, Yizheng
    [J]. 2023 EIGHTH INTERNATIONAL CONFERENCE ON FOG AND MOBILE EDGE COMPUTING, FMEC, 2023, : 262 - 269
  • [7] Backdoor Attacks on Image Classification Models in Deep Neural Networks
    Zhang, Quanxin
    Ma, Wencong
    Wang, Yajie
    Zhang, Yaoyuan
    Shi, Zhiwei
    Li, Yuanzhang
    [J]. CHINESE JOURNAL OF ELECTRONICS, 2022, 31 (02) : 199 - 212
  • [9] Backdoor Attacks on Time Series: A Generative Approach
    Jiang, Yujing
    Ma, Xingjun
    Erfani, Sarah Monazam
    Bailey, James
    [J]. 2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML, 2023, : 392 - 403
  • [10] A Comprehensive Survey on Backdoor Attacks and Their Defenses in Face Recognition Systems
    Le Roux, Quentin
    Bourbao, Eric
    Teglia, Yannick
    Kallas, Kassem
    [J]. IEEE ACCESS, 2024, 12 : 47433 - 47468