Graph-Fraudster: Adversarial Attacks on Graph Neural Network-Based Vertical Federated Learning

Cited by: 10
Authors
Chen, Jinyin [1 ]
Huang, Guohan [2 ]
Zheng, Haibin [3 ]
Yu, Shanqing [1 ]
Jiang, Wenrong [4 ]
Cui, Chen [5 ]
Affiliations
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou 310023, Peoples R China
[2] Zhejiang Univ Technol, Coll Informat Engn, Hangzhou 310023, Peoples R China
[3] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Comp Sci & Technol, Hangzhou 310023, Peoples R China
[4] Big Data & Cyber Secur Res Inst, Zhejiang Police Coll, Hangzhou 310053, Peoples R China
[5] Hangzhou Dianzi Univ, Coll Comp Sci, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Servers; Perturbation methods; Data privacy; Security; Privacy; Data models; Training; Adversarial attack; defense; graph neural network (GNN); privacy leakage; vertical federated learning (VFL);
DOI
10.1109/TCSS.2022.3161016
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Graph neural networks (GNNs) have achieved great success in graph representation learning. However, when large-scale private data are collected from the user side, a single party may lack the rich features and complete adjacency relationships a GNN needs to perform well. To address this problem, vertical federated learning (VFL) has been proposed, which protects local data by training a global model collaboratively. For graph-structured data, it is therefore natural to construct a GNN-based VFL (GVFL) framework. However, GNNs have been proven vulnerable to adversarial attacks, and whether this vulnerability carries over to GVFL has not been studied. This is the first study of adversarial attacks on GVFL. A novel adversarial attack method, named Graph-Fraudster, is proposed. It generates adversarial perturbations based on noise-added global node embeddings obtained via privacy leakage and on the gradient of pairwise nodes. Specifically, Graph-Fraudster first steals the global node embeddings and sets up a shadow model of the server as the attack generator. Second, noise is added to the node embeddings to confuse the shadow model. Finally, the gradient of pairwise nodes is used to generate attacks under the guidance of the noise-added node embeddings. Extensive experiments on five benchmark datasets demonstrate that Graph-Fraudster achieves state-of-the-art attack performance compared with baselines on GVFLs built from different GNNs. Furthermore, Graph-Fraudster remains a threat to GVFL even when two possible defense mechanisms are applied. In addition, suggestions are put forward for future work on improving the robustness of GVFL. The code and datasets can be downloaded at https://github.com/hgh0545/Graph-Fraudster.
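The three-step attack described in the abstract (steal embeddings and fit a shadow model, confuse it with noise-added embeddings, then perturb along the gradient) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear-softmax shadow model, the Gaussian noise scale, and the FGSM-style perturbation budget `eps` are all illustrative assumptions standing in for the paper's trained server model and gradient-of-pairwise-node machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stolen global node embeddings (n nodes, d dims) and a toy linear
# shadow classifier standing in for the server's top model.
n, d, c = 8, 16, 3
H = rng.normal(size=(n, d))          # stolen embeddings
W = rng.normal(size=(d, c))          # shadow model weights (assumed trained)
labels = softmax(H @ W).argmax(1)    # shadow model's clean predictions

# Step 1: confuse the shadow model with noise-added embeddings.
H_noise = H + rng.normal(scale=0.5, size=H.shape)

# Step 2: gradient of the cross-entropy loss w.r.t. the embeddings,
# evaluated at the noise-added embeddings (for a softmax-linear model
# this gradient has the closed form (p - y) @ W.T).
p = softmax(H_noise @ W)
y = np.eye(c)[labels]
grad = (p - y) @ W.T                 # d(loss)/d(embeddings)

# Step 3: FGSM-style perturbation of the embeddings, ascending the loss.
eps = 0.3
H_adv = H + eps * np.sign(grad)

# Fraction of nodes whose shadow prediction flips under the attack.
flipped = (softmax(H_adv @ W).argmax(1) != labels).mean()
```

In the paper, the perturbation is applied to the adversarial participant's local input rather than directly to the embeddings, but the gradient-guided, noise-confused construction follows the same pattern.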
Pages: 492-506 (15 pages)
相关论文
共 50 条
  • [1] A Dual Robust Graph Neural Network Against Graph Adversarial Attacks
    Tao, Qian
    Liao, Jianpeng
    Zhang, Enze
    Li, Lusi
    [J]. NEURAL NETWORKS, 2024, 175
  • [2] Adversarial Attacks on Graph Neural Network Based on Local Influence Analysis Model
    Wu Yiteng
    Liu Wei
    Yu Hongtao
    Cao Xiaochun
    [J]. JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44 (07) : 2576 - 2583
  • [3] A neural network-based vertical federated learning framework with server integration
    Anees, Amir
    Field, Matthew
    Holloway, Lois
    [J]. Engineering Applications of Artificial Intelligence, 2024, 138
  • [4] Revisiting Adversarial Attacks on Graph Neural Networks for Graph Classification
    Wang, Xin
    Chang, Heng
    Xie, Beini
    Bian, Tian
    Zhou, Shiji
    Wang, Daixin
    Zhang, Zhiqiang
    Zhu, Wenwu
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (05) : 2166 - 2178
  • [5] Preserving node similarity adversarial learning graph representation with graph neural network
    Yang, Shangying
    Zhang, Yinglong
    Jiawei, E.
    Xia, Xuewen
    Xu, Xing
    [J]. ENGINEERING REPORTS, 2024,
  • [6] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    [J]. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 6246 - 6250
  • [7] Exploratory Adversarial Attacks on Graph Neural Networks
    Lin, Xixun
    Zhou, Chuan
    Yang, Hong
    Wu, Jia
    Wang, Haibo
    Cao, Yanan
    Wang, Bin
    [J]. 20TH IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2020), 2020, : 1136 - 1141
  • [8] Adversarial Attacks on Neural Networks for Graph Data
    Zuegner, Daniel
    Akbarnejad, Amir
    Guennemann, Stephan
    [J]. KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2847 - 2856
  • [9] Backdoor Attack Against Split Neural Network-Based Vertical Federated Learning
    He, Ying
    Shen, Zhili
    Hua, Jingyu
    Dong, Qixuan
    Niu, Jiacheng
    Tong, Wei
    Huang, Xu
    Li, Chen
    Zhong, Sheng
    [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 748 - 763
  • [10] Graph Neural Network-based Vulnerability Predication
    Feng, Qi
    Feng, Chendong
    Hong, Weijiang
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE MAINTENANCE AND EVOLUTION (ICSME 2020), 2020, : 800 - 801