On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models

Cited: 7
Authors
Zhao, Benjamin Zi Hao [2 ,3 ]
Agrawal, Aviral [1 ,3 ,4 ]
Coburn, Catisha [5 ]
Asghar, Hassan Jameel [1 ,3 ]
Bhaskar, Raghav [3 ]
Kaafar, Mohamed Ali [1 ,3 ]
Webb, Darren [5 ]
Dickinson, Peter [5 ]
Affiliations
[1] Macquarie Univ, N Ryde, NSW, Australia
[2] Univ New South Wales, Sydney, NSW, Australia
[3] Data61 CSIRO, Sydney, NSW, Australia
[4] BITS Pilani KK Birla Goa Campus, Sancoale, India
[5] Def Sci & Technol Grp, Cyber & Elect Warfare Div, Canberra, ACT, Australia
DOI
10.1109/EuroSP51992.2021.00025
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
With an increase in low-cost machine learning APIs, advanced machine learning models may be trained on private datasets and monetized by providing them as a service. However, privacy researchers have demonstrated that these models may leak information about records in the training dataset via membership inference attacks. In this paper, we take a closer look at another inference attack reported in the literature, called attribute inference, whereby an attacker tries to infer missing attributes of a partially known record used in the training dataset by accessing the machine learning model as an API. We show that even if a classification model succumbs to membership inference attacks, it is unlikely to be susceptible to attribute inference attacks. We demonstrate that this is because membership inference attacks fail to distinguish a member from a nearby non-member. We call the ability of an attacker to distinguish the two (similar) vectors strong membership inference. We show that membership inference attacks cannot infer membership in this strong setting, and hence inferring attributes is infeasible. However, under a relaxed notion of attribute inference, called approximate attribute inference, we show that it is possible to infer attributes close to the true attributes. We verify our results on three publicly available datasets, five membership inference attacks, and three attribute inference attacks reported in the literature.
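The abstract describes an attacker who completes a partially known record by querying the model as an API. A common instantiation in the literature is confidence-based: try each candidate value for the missing attribute and keep the one that makes the model most confident on the record's known label. The sketch below is a minimal, hypothetical illustration of that idea; `predict_proba`, the toy model, and all parameter names are illustrative assumptions, not the paper's actual attack code.

```python
# Hypothetical sketch of a confidence-based attribute inference attack
# against a model exposed as a prediction API. All names are illustrative.
import numpy as np

def attribute_inference(predict_proba, partial_record, attr_index,
                        candidate_values, true_label):
    """Try each candidate value for the missing attribute; return the
    value whose completed record receives the highest model confidence
    on the record's known true label."""
    best_value, best_conf = None, -1.0
    for v in candidate_values:
        record = np.array(partial_record, dtype=float)
        record[attr_index] = v          # fill in the unknown attribute
        conf = predict_proba(record)[true_label]
        if conf > best_conf:
            best_value, best_conf = v, conf
    return best_value

# Toy stand-in for a deployed model's API: confidence for label 1
# peaks when the hidden attribute (index 2) is near 0.7.
def toy_predict_proba(x):
    p1 = np.exp(-((x[2] - 0.7) ** 2) / 0.02)
    return [1.0 - p1, p1]

guess = attribute_inference(toy_predict_proba, [0.1, 0.5, 0.0], 2,
                            [0.0, 0.25, 0.5, 0.75, 1.0], true_label=1)
print(guess)  # -> 0.75, the candidate closest to the true value 0.7
```

Note that the attack recovers the candidate nearest the true attribute rather than the exact value, which mirrors the paper's distinction between exact attribute inference and the relaxed, approximate variant.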
Pages: 232-251 (20 pages)
Related Papers (50 entries)
  • [1] Membership Inference Attacks Against Machine Learning Models
    Shokri, Reza
    Stronati, Marco
    Song, Congzheng
    Shmatikov, Vitaly
    [J]. 2017 IEEE SYMPOSIUM ON SECURITY AND PRIVACY (SP), 2017, : 3 - 18
  • [2] Correlation inference attacks against machine learning models
    Cretu, Ana-Maria
    Guepin, Florent
    de Montjoye, Yves-Alexandre
    [J]. SCIENCE ADVANCES, 2024, 10 (28):
  • [3] AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning
    Jia, Jinyuan
    Gong, Neil Zhenqiang
    [J]. PROCEEDINGS OF THE 27TH USENIX SECURITY SYMPOSIUM, 2018, : 513 - 529
  • [4] Towards Securing Machine Learning Models Against Membership Inference Attacks
    Ben Hamida, Sana
    Mrabet, Hichem
    Belguith, Sana
    Alhomoud, Adeeb
    Jemai, Abderrazak
    [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 70 (03): : 4897 - 4919
  • [5] Membership Inference Attacks Against Machine Learning Models via Prediction Sensitivity
    Liu, Lan
    Wang, Yi
    Liu, Gaoyang
    Peng, Kai
    Wang, Chen
    [J]. IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (03) : 2341 - 2347
  • [6] Membership Inference Attacks on Machine Learning: A Survey
    Hu, Hongsheng
    Salcic, Zoran
    Sun, Lichao
    Dobbie, Gillian
    Yu, Philip S.
    Zhang, Xuyun
    [J]. ACM COMPUTING SURVEYS, 2022, 54 (11S)
  • [7] Mitigating Membership Inference Attacks in Machine Learning as a Service
    Bouhaddi, Myria
    Adi, Kamel
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE, CSR, 2023, : 262 - 268
  • [8] A Survey on Membership Inference Attacks Against Machine Learning
    Bai, Yang
    Chen, Ting
    Fan, Mingyu
    [J]. International Journal of Network Security, 2021, 23 (04) : 685 - 697
  • [9] Demystifying Membership Inference Attacks in Machine Learning as a Service
    Truex, Stacey
    Liu, Ling
    Gursoy, Mehmet Emre
    Yu, Lei
    Wei, Wenqi
    [J]. IEEE TRANSACTIONS ON SERVICES COMPUTING, 2021, 14 (06) : 2073 - 2089
  • [10] ML-DOCTOR: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
    Liu, Yugeng
    Wen, Rui
    He, Xinlei
    Salem, Ahmed
    Zhang, Zhikun
    Backes, Michael
    De Cristofaro, Emiliano
    Fritz, Mario
    Zhang, Yang
    [J]. PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022, : 4525 - 4542