Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks

Cited by: 0
Authors
Struppek, Lukas [1 ]
Hintersdorf, Dominik [1 ]
Correia, Antonio De Almeida [1 ]
Adler, Antonia [2 ]
Kersting, Kristian [1 ,3 ,4 ]
Affiliations
[1] Tech Univ Darmstadt, Dept Comp Sci, Darmstadt, Germany
[2] Univ Bundeswehr Munchen, Munich, Germany
[3] Tech Univ Darmstadt, Ctr Cognit Sci, Darmstadt, Germany
[4] Hessian Ctr AI Hessian AI, Darmstadt, Germany
Keywords: (none listed)
DOI: (not available)
CLC classification: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics from a target classifier's private training data by exploiting the model's learned knowledge. Previous research has developed generative MIAs that use generative adversarial networks (GANs) as image priors tailored to a specific target model. This makes the attacks time- and resource-consuming, inflexible, and susceptible to distributional shifts between datasets. To overcome these drawbacks, we present Plug & Play Attacks, which relax the dependency between the target model and image prior, and enable the use of a single GAN to attack a wide range of targets, requiring only minor adjustments to the attack. Moreover, we show that powerful MIAs are possible even with publicly available pre-trained GANs and under strong distributional shifts, for which previous approaches fail to produce meaningful results. Our extensive evaluation confirms the improved robustness and flexibility of Plug & Play Attacks and their ability to create high-quality images revealing sensitive class characteristics.
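The core mechanism the abstract describes — optimizing a generative model's latent code so that the target classifier assigns high confidence to a chosen class — can be sketched with toy stand-ins. The linear "generator" `G`, "classifier" `W`, and the `invert` helper below are hypothetical illustrations, not the paper's method; a real attack would use a pre-trained GAN and the target network, with a robust loss and many candidate latents.

```python
import numpy as np

# Hypothetical stand-ins: a linear "generator" mapping latents to images
# and a softmax "classifier". Real generative MIAs use a pre-trained GAN
# as the image prior and the trained target model as the classifier.
rng = np.random.default_rng(0)
G = rng.normal(size=(16, 4))   # generator weights: latent (4) -> image (16)
W = rng.normal(size=(3, 16))   # classifier weights: image (16) -> 3 classes

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def class_prob(z, target):
    """Probability the classifier assigns to `target` for the image G @ z."""
    return softmax(W @ (G @ z))[target]

def invert(target, steps=300, lr=0.01, eps=1e-4):
    """Gradient ascent on the latent z to maximize the log-probability of
    the target class -- the basic loop shared by generative model
    inversion attacks (finite differences stand in for autograd here)."""
    z = rng.normal(size=4)
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(len(z)):
            dz = np.zeros_like(z)
            dz[i] = eps
            grad[i] = (np.log(class_prob(z + dz, target))
                       - np.log(class_prob(z - dz, target))) / (2 * eps)
        z += lr * grad
    return z

z_star = invert(target=1)
print(f"target-class confidence after inversion: {class_prob(z_star, 1):.2f}")
```

Because the optimization only queries the classifier through its output probabilities, swapping in a different target model requires no retraining of the generator — the decoupling that, per the abstract, Plug & Play Attacks push much further for real GANs and classifiers.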
Pages: 24