Defense against membership inference attack in graph neural networks through graph perturbation

Cited by: 6
Authors
Wang, Kai [1 ]
Wu, Jinxia [1 ]
Zhu, Tianqing [1 ]
Ren, Wei [1 ]
Hong, Ying [2 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, 388 Lumo Rd, Wuhan 430074, Peoples R China
[2] Wuhan Text Univ, Sch Comp Sci & Artificial Intelligence, 1 Sunshine Ave, Wuhan 430200, Peoples R China
Keywords
Graph neural network; Graph privacy-preserving; Membership inference attack; Perturbation injection; DEEP LEARNING ARCHITECTURE; PRIVACY;
DOI
10.1007/s10207-022-00646-y
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations can raise privacy issues when the node representations contain sensitive or private user information. Although many machine learning models and techniques have been proposed to preserve the privacy of traditional non-graph-structured data, little work addresses graph privacy concerns. In this paper, we investigate the privacy problem of node embedding representations, in which an adversary can infer a user's private information by designing an inference attack algorithm. To address this problem, we develop a defense against white-box membership inference attacks based on injecting perturbations into the graph. In particular, we employ a graph reconstruction model and inject a controlled amount of noise into the model's intermediate output, i.e., the latent representations of the nodes. Experimental results on real-world datasets, evaluated with usability and privacy metrics, demonstrate that our approach effectively resists membership inference attacks. Moreover, our method makes the trade-off between usability and privacy introduced by the defense directly observable, providing a reference for subsequent research on graph privacy protection.
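The core mechanism the abstract describes, injecting noise into the latent node representations produced by a graph model, can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `perturb_embeddings`, the use of Gaussian noise, and the `scale` parameter are all illustrative assumptions standing in for whatever perturbation the paper actually calibrates:

```python
import numpy as np

def perturb_embeddings(z, scale=0.1, rng=None):
    """Add zero-mean Gaussian noise to latent node embeddings.

    z     : (num_nodes, dim) array of intermediate representations
            (e.g., the hidden output of a graph reconstruction model).
    scale : noise standard deviation; larger values mean stronger
            privacy but lower downstream-task usability.
    """
    rng = np.random.default_rng(rng)
    noise = rng.normal(loc=0.0, scale=scale, size=z.shape)
    return z + noise

# Toy example: 4 nodes with 3-dimensional embeddings.
z = np.zeros((4, 3))
z_noisy = perturb_embeddings(z, scale=0.1, rng=0)
print(z_noisy.shape)
```

Sweeping `scale` and measuring attack accuracy against task accuracy is one simple way to make the usability/privacy trade-off mentioned in the abstract directly observable.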
Pages: 497-509 (13 pages)
Related Papers
50 records in total
  • [11] Black-box Adversarial Attack and Defense on Graph Neural Networks
    Li, Haoyang
    Di, Shimin
    Li, Zijian
    Chen, Lei
    Cao, Jiannong
    2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022, : 1017 - 1030
  • [12] Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
    Zhang, Mengmei
    Hu, Linmei
    Shi, Chuan
    Wang, Xiao
    20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 791 - 800
  • [13] A realistic model extraction attack against graph neural networks
    Guan, Faqian
    Zhu, Tianqing
    Tong, Hanjin
    Zhou, Wanlei
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [14] Single Node Injection Attack against Graph Neural Networks
    Tao, Shuchang
    Cao, Qi
    Shen, Huawei
    Huang, Junjie
    Wu, Yunfan
    Cheng, Xueqi
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, CIKM 2021, 2021, : 1794 - 1803
  • [15] Unboxing the graph: Towards interpretable graph neural networks for transport prediction through neural relational inference
    Tygesen, Mathias Niemann
    Pereira, Francisco Camara
    Rodrigues, Filipe
    TRANSPORTATION RESEARCH PART C-EMERGING TECHNOLOGIES, 2023, 146
  • [16] Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
    Wang, Binghui
    Jia, Jinyuan
    Cao, Xiaoyu
    Gong, Neil Zhenqiang
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1645 - 1653
  • [17] COST AWARE UNTARGETED POISONING ATTACK AGAINST GRAPH NEURAL NETWORKS
    Han, Yuwei
    Lai, Yuni
    Zhu, Yulin
    Zhou, Kai
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 4940 - 4944
  • [18] PIAFGNN: Property Inference Attacks against Federated Graph Neural Networks
    Liu, Jiewen
    Chen, Bing
    Xue, Baolu
    Guo, Mengya
    Xu, Yuntao
    CMC-COMPUTERS MATERIALS & CONTINUA, 2025, 82 (02): : 1857 - 1877
  • [19] CLB-Defense: based on contrastive learning defense for graph neural network against backdoor attack
    Chen J.
    Xiong H.
    Ma H.
    Zheng Y.
    Tongxin Xuebao/Journal on Communications, 2023, 44 (04): : 154 - 166
  • [20] Efficient Attack Graph Analysis through Approximate Inference
    Munoz-Gonzalez, Luis
    Sgandurra, Daniele
    Paudice, Andrea
    Lupu, Emil C.
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2017, 20 (03)