Enhancing Deep Learning Model Privacy Against Membership Inference Attacks Using Privacy-Preserving Oversampling

Cited by: 0
Authors
Subhasish Ghosh [1 ]
Amit Kr Mandal [1 ]
Agostino Cortesi [2 ]
Affiliations
[1] SRM University AP, Department of Computer Science and Engineering
[2] Ca’ Foscari University, Department of Computer Science
Keywords
Oversampling method; Deep neural networks; Membership inference attack; Differential privacy
DOI
10.1007/s42979-025-03845-1
Abstract
Overfitting of deep learning models trained on moderately imbalanced datasets is a main factor in the success of membership inference attacks. While many oversampling methods have been designed to reduce data imbalance, only a few defend deep neural network (DNN) models against membership inference attacks. We introduce the privacy-preserving synthetic minority oversampling technique (PP-SMOTE), which applies privacy-preservation mechanisms during data preprocessing rather than during model training. PP-SMOTE adds Laplace noise, calibrated to the L1 sensitivity of the dataset, when generating synthetic data points for minority classes. DNN models trained on datasets oversampled with PP-SMOTE are less vulnerable to membership inference attacks than models trained on datasets oversampled with GAN or SVMSMOTE. Compared with differential privacy mechanisms such as DP-SGD and DP-GAN, PP-SMOTE retains more model accuracy while yielding lower membership inference attack accuracy. Experimental results show that PP-SMOTE reduces membership inference attack accuracy to below approximately 0.60 while preserving high model accuracy, with AUC scores above approximately 0.90. Additionally, the broader confidence-score distribution achieved by PP-SMOTE improves both model accuracy and resistance to membership inference attacks (MIA). This is confirmed by the loss-epoch curve, which shows stable convergence and minimal overfitting during training. The higher variance in confidence scores also makes it harder for attackers to distinguish training data, thereby reducing the risk of MIA.
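The abstract describes PP-SMOTE as SMOTE-style interpolation between minority-class samples, followed by Laplace noise scaled to the L1 sensitivity of the dataset. A minimal sketch of that idea is shown below. It is an illustration, not the authors' implementation: the function name `pp_smote`, the choice of the summed per-feature range as an L1-sensitivity proxy, and the per-point privacy budget `epsilon` are all assumptions made here, since the abstract does not specify how the sensitivity is computed or how the budget is allocated.

```python
import numpy as np

def pp_smote(X_min, n_synthetic, epsilon=1.0, k=5, seed=0):
    """Illustrative privacy-preserving SMOTE variant (hypothetical sketch).

    X_min       : (n, d) array of minority-class samples.
    n_synthetic : number of synthetic points to generate.
    epsilon     : assumed privacy budget per synthetic point.
    k           : number of nearest neighbors used for interpolation.
    """
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    n, d = X_min.shape

    # Pairwise L1 distances between minority samples, for k-NN lookup.
    dists = np.abs(X_min[:, None, :] - X_min[None, :, :]).sum(axis=2)
    np.fill_diagonal(dists, np.inf)       # a sample is not its own neighbor
    k = min(k, n - 1)
    nn_idx = np.argsort(dists, axis=1)[:, :k]

    # Assumed L1-sensitivity proxy: sum of per-feature ranges (max - min).
    sensitivity = np.ptp(X_min, axis=0).sum()
    scale = sensitivity / epsilon         # Laplace scale b = Delta_1 / epsilon

    synth = np.empty((n_synthetic, d))
    for i in range(n_synthetic):
        base = rng.integers(n)            # pick a random minority sample
        nbr = nn_idx[base, rng.integers(k)]  # pick one of its k neighbors
        gap = rng.random()                # SMOTE interpolation coefficient
        point = X_min[base] + gap * (X_min[nbr] - X_min[base])
        synth[i] = point + rng.laplace(0.0, scale, size=d)  # add Laplace noise
    return synth
```

Applying noise at preprocessing time, as here, means the downstream DNN can be trained with any standard optimizer; this is the contrast the abstract draws with training-time mechanisms such as DP-SGD.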