Privacy Risks of Securing Machine Learning Models against Adversarial Examples

Cited by: 107
Authors
Song, Liwei [1]
Shokri, Reza [2]
Mittal, Prateek [1]
Affiliations
[1] Princeton Univ, Princeton, NJ 08544 USA
[2] Natl Univ Singapore, Singapore, Singapore
Funding
U.S. National Science Foundation; National Research Foundation of Singapore;
Keywords
machine learning; membership inference attacks; adversarial examples and defenses; deep neural networks; face recognition
DOI: 10.1145/3319535.3354211
Chinese Library Classification
TP [automation and computer technology];
Discipline Code
0812;
Abstract
The arms race between attacks and defenses for machine learning models has come to the forefront in recent years, in both the security community and the privacy community. However, a major limitation of previous research is that the security domain and the privacy domain have typically been considered separately. It is thus unclear whether the defense methods in one domain will have any unexpected impact on the other domain. In this paper, we take a step towards resolving this limitation by combining the two domains. In particular, we measure the success of membership inference attacks against six state-of-the-art defense methods that mitigate the risk of adversarial examples (i.e., evasion attacks). Membership inference attacks determine whether or not an individual data record was part of a model's training set; the accuracy of such attacks reflects how much information the training algorithm leaks about individual members of the training set. Defense methods against adversarial examples influence the model's decision boundaries such that model predictions remain unchanged in a small region around each input. However, this objective is optimized on the training data, so individual records in the training set exert a significant influence on the robust model, which makes it more vulnerable to inference attacks. To perform the membership inference attacks, we leverage existing inference methods that exploit model predictions, and we also propose two new inference methods that exploit structural properties of robust models on adversarially perturbed data. Our experimental evaluation demonstrates that, compared with naturally trained (undefended) models, adversarial defense methods can indeed increase the target model's risk against membership inference attacks: when robust models are trained with these defenses, the membership inference advantage increases by up to 4.5 times. Beyond revealing the privacy risks of adversarial defenses, we further investigate factors, such as model capacity, that influence the membership information leakage.
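
To make the measurement concrete: in this line of work, the membership inference advantage is typically the gap between the attack's true-positive rate on training members and its false-positive rate on non-members. The sketch below is illustrative only (not the authors' code) and shows the simplest prediction-confidence attack that the existing inference methods build on; the model object, its predict_proba interface, and the threshold value are assumptions made for the example. The paper's new attacks additionally score adversarially perturbed copies of each input, which is not shown here.

    # Hypothetical sketch of a confidence-thresholding membership inference attack.
    # Assumes `model` exposes a scikit-learn-style predict_proba(x) method and that
    # member (training) and non-member (test) splits are known for evaluation.
    import numpy as np

    def true_label_confidence(model, x, y):
        # Probability the model assigns to the correct label of each record.
        probs = model.predict_proba(x)  # shape: (n_samples, n_classes)
        return probs[np.arange(len(y)), y]

    def membership_advantage(model, x_member, y_member, x_nonmember, y_nonmember,
                             threshold=0.9):
        # Guess "member" whenever confidence on the true label exceeds the threshold.
        # Advantage = TPR on members - FPR on non-members; 0 means no leakage.
        tpr = np.mean(true_label_confidence(model, x_member, y_member) > threshold)
        fpr = np.mean(true_label_confidence(model, x_nonmember, y_nonmember) > threshold)
        return float(tpr - fpr)

Applied to both a naturally trained model and its adversarially trained counterpart on the same data split, a larger advantage for the robust model is the kind of gap the abstract's 4.5x figure refers to.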
Pages: 241 - 257
Number of pages: 17