A Methodology for Formalizing Model-Inversion Attacks

Cited by: 75
Authors:
Wu, Xi [1 ]
Fredrikson, Matthew [2 ]
Jha, Somesh [1 ]
Naughton, Jeffrey F. [1 ]
Affiliations:
[1] Univ Wisconsin Madison, Madison, WI 53706 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Keywords:
BOUNDS
DOI:
10.1109/CSF.2016.32
CLC number:
TP301 (Theory and Methods)
Subject classification code:
081202
Abstract:
The model-inversion (MI) attack concerns the confidentiality of training data induced by releasing machine-learning models, and has recently received increasing attention. Motivated by existing MI attacks, and by other previous attacks that turn out to be MI "in disguise," this paper initiates a formal study of MI attacks by presenting a game-based methodology. Our methodology uncovers a number of subtle issues, and devising a rigorous game-based definition, analogous to those in cryptography, is an interesting avenue for future work. We describe methodologies for two types of attacks. The first is for black-box attacks, which consider an adversary who infers sensitive values with only oracle access to a model. The second methodology targets the white-box scenario, where an adversary has some additional knowledge about the structure of a model. For the restricted class of Boolean models and black-box attacks, we characterize model invertibility using the concept of influence from Boolean analysis in the noiseless case, and connect model invertibility with stable influence in the noisy case. Interestingly, we also discover an intriguing phenomenon, which we call "invertibility interference," whereby a highly invertible model quickly becomes highly non-invertible once a little noise is added. For the white-box case, we consider a phenomenon common in machine-learning models, where the model is a sequential composition of several sub-models. We show, quantitatively, that even very restricted communication between layers can leak a significant amount of information. Perhaps more importantly, our study also unveils unexpected computational power of these restricted communication channels, which, to the best of our knowledge, was not previously known.
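For orientation, the "influence" the abstract refers to is the standard notion from the analysis of Boolean functions (stated here from that literature; the paper's exact formalization may differ). For $f\colon \{0,1\}^n \to \{0,1\}$, the influence of coordinate $i$ is

$\mathrm{Inf}_i[f] = \Pr_{x \sim \{0,1\}^n}\left[\, f(x) \neq f(x^{\oplus i}) \,\right]$,

where $x^{\oplus i}$ is $x$ with its $i$-th bit flipped; a sensitive attribute with high influence is intuitively easier to recover from input-output behavior alone. The sketch below estimates this quantity using only oracle queries, matching the black-box access model described above. It is a minimal illustration, not code from the paper; estimate_influence and maj3 are hypothetical names.

import random

def estimate_influence(model, n, i, samples=10_000):
    # Monte Carlo estimate of Inf_i[f] = Pr[f(x) != f(x with bit i flipped)],
    # using only black-box (oracle) access to `model`.
    flips = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        y = x.copy()
        y[i] ^= 1  # flip the i-th input bit
        if model(x) != model(y):
            flips += 1
    return flips / samples

# Toy check: for 3-bit majority, every coordinate has influence 1/2
# (a coordinate is pivotal exactly when the other two bits disagree).
maj3 = lambda x: int(sum(x) >= 2)
print(estimate_influence(maj3, n=3, i=0))  # ~0.5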
Pages: 355-370
Page count: 16
Related papers (50 records in total):
  • [1] Peng, Xiong; Liu, Feng; Zhang, Jingfeng; Lan, Long; Ye, Junjie; Liu, Tongliang; Han, Bo. Bilateral Dependency Optimization: Defending Against Model-Inversion Attacks. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 1358-1367.
  • [2] Zhang, Yuheng; Jia, Ruoxi; Pei, Hengzhi; Wang, Wenxiao; Li, Bo; Song, Dawn. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 250-258.
  • [3] Fontanella, Alessandro; Belloli, Marco. Model-Inversion Feedforward Control for Wave Load Reduction in Floating Wind Turbines. Proceedings of ASME 2021 40th International Conference on Ocean, Offshore and Arctic Engineering (OMAE2021), Vol. 9, 2021.
  • [4] Brakke, T. W.; Otterman, J.; Irons, J. R.; Hall, F. G. Assessing Canopy Biomass and Vigor by Model-Inversion of Bidirectional Reflectances: Problems and Prospects. IGARSS '96 - 1996 International Geoscience and Remote Sensing Symposium: Remote Sensing for a Sustainable Future, Vols. I-IV, 1996: 1657-1659.
  • [5] Wang, Kuan-Chieh; Fu, Yan; Li, Ke; Khisti, Ashish; Zemel, Richard; Makhzani, Alireza. Variational Model Inversion Attacks. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.
  • [6] Zhao, Xuejun; Zhang, Wencan; Xiao, Xiaokui; Lim, Brian. Exploiting Explanations for Model Inversion Attacks. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 662-672.
  • [7] Zhou, Shuai; Zhu, Tianqing; Ye, Dayong; Yu, Xin; Zhou, Wanlei. Boosting Model Inversion Attacks With Adversarial Examples. IEEE Transactions on Dependable and Secure Computing, 2024, 21(3): 1451-1468.
  • [8] Alufaisan, Yasmeen; Kantarcioglu, Murat; Zhou, Yan. Robust Transparency Against Model Inversion Attacks. IEEE Transactions on Dependable and Secure Computing, 2021, 18(5): 2061-2073.
  • [9] He, Zecheng; Zhang, Tianwei; Lee, Ruby B. Model Inversion Attacks Against Collaborative Inference. 35th Annual Computer Security Applications Conference (ACSAC), 2019: 148-162.
  • [10] Struppek, Lukas; Hintersdorf, Dominik; Correia, Antonio De Almeida; Adler, Antonia; Kersting, Kristian. Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks. International Conference on Machine Learning (ICML 2022), Vol. 162, 2022.