Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation

Cited by: 1
Authors
Liu, Ganlin [1 ]
Huang, Xiaowei [1 ]
Yi, Xinping [1 ]
Affiliations
[1] Univ Liverpool, Liverpool, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC)
Keywords
Label poisoning attack; Graph neural networks; Label propagation; Graph convolutional network;
DOI
10.1007/978-3-031-20065-6_14
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have achieved outstanding performance in semi-supervised learning tasks with partially labeled graph-structured data. However, labeling graph data for training is a challenging task, and inaccurate labels may mislead the training process toward erroneous GNN models for node classification. In this paper, we consider label poisoning attacks on training data, where the labels of input data are modified by an adversary before training, to understand to what extent state-of-the-art GNN models are resistant or vulnerable to such attacks. Specifically, we propose a label poisoning attack framework for graph convolutional networks (GCNs), inspired by the equivalence between label propagation and decoupled GCNs that separate message passing from neural networks. Instead of attacking the entire GCN model, we propose to attack solely the label propagation used for message passing. It turns out that a gradient-based attack on label propagation is effective and efficient at misleading GCN training. More remarkably, such a label attack can be topology-agnostic, in the sense that the labels to be attacked can be chosen efficiently without knowledge of the graph structure. Extensive experimental results demonstrate the effectiveness of the proposed method against state-of-the-art GCN-like models.
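The abstract describes the attack only at a high level, so the following is a minimal NumPy sketch of the two ingredients it names: label propagation as the decoupled message-passing step, and a score for choosing which training labels to poison. It is not the authors' implementation; the 6-node toy graph, the propagation depth K = 2, and the margin-based surrogate used to rank candidate flips are all illustrative assumptions. Because K-step label propagation F = S^K Y is linear in the label matrix Y, the effect of flipping any single training label on the unlabeled nodes can be evaluated in closed form, which stands in here for the gradient-based score mentioned in the abstract.

```python
# Sketch of label poisoning via label propagation (illustrative assumptions only).
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

# Toy graph: 6 nodes in two loosely connected clusters, 2 classes (assumed example).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
S = normalized_adjacency(A)

train_idx = [0, 3]            # labeled nodes the adversary is allowed to poison
Y = np.zeros((6, 2))
Y[0, 0] = 1.0                 # node 0 labeled class 0
Y[3, 1] = 1.0                 # node 3 labeled class 1

# Decoupled-GCN view: K-step propagation F = S^K Y, with no neural network involved.
K = 2
SK = np.linalg.matrix_power(S, K)
F = SK @ Y
unlabeled = [i for i in range(6) if i not in train_idx]

# Margin surrogate on unlabeled nodes: how strongly each prefers its current class.
margins = F[unlabeled].max(axis=1) - F[unlabeled].min(axis=1)

# Since F is linear in Y, score each candidate label flip exactly by the
# drop in total margin it causes (playing the role of a gradient-based score).
scores = {}
for i in train_idx:
    Y_flip = Y.copy()
    Y_flip[i] = Y[i][::-1]    # swap the one-hot label of node i
    F_flip = SK @ Y_flip
    m_flip = F_flip[unlabeled].max(axis=1) - F_flip[unlabeled].min(axis=1)
    scores[i] = margins.sum() - m_flip.sum()

# Poison the label whose flip degrades the unlabeled-node margins the most.
target = max(scores, key=scores.get)
print("flip label of node", target, "margin drop:", round(scores[target], 3))
```

Note that this toy score uses the propagation matrix S explicitly; the topology-agnostic selection claimed in the abstract refers to choosing labels to attack without access to the graph structure, which this sketch does not attempt to reproduce.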
Pages: 227-243
Page count: 17
Related Papers
50 records in total
  • [1] Adversarial Label-Flipping Attack and Defense for Graph Neural Networks
    Zhang, Mengmei; Hu, Linmei; Shi, Chuan; Wang, Xiao
    20th IEEE International Conference on Data Mining (ICDM 2020), 2020: 791-800
  • [2] Label flipping adversarial attack on graph neural network
    Wu, Y.; Liu, W.; Yu, H.
    Tongxin Xuebao/Journal on Communications, 2021, 42(09): 65-74
  • [3] Label Propagation and Graph Neural Networks
    Benson, Austin
    2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), 2021: 241
  • [4] An effective targeted label adversarial attack on graph neural networks by strategically allocating the attack budget
    Cao, Feilong; Chen, Qiyang; Ye, Hailiang
    Knowledge-Based Systems, 2024, 293
  • [5] A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
    Mu, Jiaming; Wang, Binghui; Li, Qi; Sun, Kun; Xu, Mingwei; Liu, Zhuotao
    CCS '21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021: 108-125
  • [6] Graph structure and homophily for label propagation in Graph Neural Networks
    Vandromme, Maxence; Petiton, Serge G.
    2023 IEEE 16th International Symposium on Embedded Multicore/Many-Core Systems-on-Chip (MCSoC), 2023: 194-201
  • [7] Combining Graph Convolutional Neural Networks and Label Propagation
    Wang, Hongwei; Leskovec, Jure
    ACM Transactions on Information Systems, 2022, 40(04)
  • [8] Role Equivalence Attention for Label Propagation in Graph Neural Networks
    Park, Hogun; Neville, Jennifer
    Advances in Knowledge Discovery and Data Mining (PAKDD 2020), Part II, 2020, 12085: 555-567
  • [9] Graphfool: Targeted Label Adversarial Attack on Graph Embedding
    Chen, Jinyin; Huang, Guohan; Zheng, Haibin; Zhang, Dunjie; Lin, Xiang
    IEEE Transactions on Computational Social Systems, 2023, 10(05): 2523-2535
  • [10] Label Propagation with Neural Networks
    Pal, Aditya; Chakrabarti, Deepayan
    CIKM '18: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, 2018: 1671-1674