Black-box Adversarial Attack and Defense on Graph Neural Networks

Cited by: 8
Authors
Li, Haoyang [1 ]
Di, Shimin [1 ]
Li, Zijian [1 ]
Chen, Lei [1 ]
Cao, Jiannong [2 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Keywords
Graph Neural Network; Adversarial Attack; Graph Defense;
DOI
10.1109/ICDE53745.2022.00081
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have achieved great success on various graph tasks. However, recent studies have revealed that GNNs are vulnerable to adversarial attacks, including topology modifications and feature perturbations. Despite this fruitful progress, existing attackers either require node labels and GNN parameters to optimize a bi-level problem or cannot cover both topology modifications and feature perturbations, which makes them impractical, inefficient, or ineffective. In this paper, we propose a black-box attacker, PEEGA, which for practicability is restricted to accessing only node features and graph topology. Specifically, we propose to measure the negative impact of various adversarial attacks from the perspective of node representations, and thereby formulate a single-level problem that can be solved efficiently. Furthermore, we observe that existing attackers tend to blur the context of nodes by adding edges between nodes with different labels, so that GNNs can no longer recognize nodes. Based on this observation, we propose a GNN defender, GNAT, which incorporates three augmented graphs, i.e., a topology graph, a feature graph, and an ego graph, to make the context of nodes more distinguishable. Extensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of our proposed attacker, even though it accesses neither node labels nor GNN parameters. The effectiveness and efficiency of our proposed defender are likewise validated by substantial experiments.
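The single-level, representation-based attack objective described in the abstract can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' PEEGA implementation: it uses only the adjacency matrix and node features (no labels, no GNN parameters), and greedily flips the one edge whose change most perturbs symmetrically normalized feature propagation. The function names `propagate` and `best_single_edge_flip` are illustrative inventions.

```python
# Hypothetical sketch of a representation-based black-box edge attack
# (NOT the paper's PEEGA code): flip the single edge whose addition or
# removal most shifts propagated node representations, using only the
# features X and adjacency A -- no labels or GNN parameters.
import numpy as np

def propagate(A, X):
    """One step of symmetric-normalized propagation: D^{-1/2}(A+I)D^{-1/2} X."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X

def best_single_edge_flip(A, X):
    """Single-level objective: maximize the shift in node representations."""
    base = propagate(A, X)
    n = A.shape[0]
    best, best_score = None, -1.0
    for i in range(n):
        for j in range(i + 1, n):
            A2 = A.copy()
            A2[i, j] = A2[j, i] = 1 - A2[i, j]  # flip edge (i, j)
            score = np.linalg.norm(propagate(A2, X) - base)
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```

Because the objective is a direct function of the perturbed graph (no inner model-training loop), it avoids the bi-level structure that white-box attackers must solve; a budgeted attack would simply repeat the greedy flip up to the budget.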
Pages: 1017 - 1030
Page count: 14
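The defender's three augmented views (topology graph, feature graph, ego graph) can likewise be sketched in a hedged toy form. This is an assumed structure in the spirit of GNAT, not the paper's implementation: the feature graph here is a kNN graph over cosine similarity, the ego graph keeps only self-loops, and representations are averaged across the three views. All function names are illustrative.

```python
# Hypothetical multi-view defense sketch (NOT the paper's GNAT code):
# augment the observed topology with a feature-similarity kNN graph and
# an ego (self-loop only) graph, then average propagated representations.
import numpy as np

def knn_feature_graph(X, k=2):
    """Adjacency linking each node to its k most cosine-similar neighbors."""
    norm = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = norm @ norm.T
    np.fill_diagonal(S, -np.inf)           # never select self as a neighbor
    A_f = np.zeros_like(S)
    for i, nbrs in enumerate(np.argsort(-S, axis=1)[:, :k]):
        A_f[i, nbrs] = 1.0
    return np.maximum(A_f, A_f.T)          # symmetrize

def multi_view_representation(A, X, k=2):
    """Average normalized propagation over topology, feature, and ego views."""
    n = A.shape[0]
    views = [A, knn_feature_graph(X, k), np.zeros((n, n))]
    reps = []
    for V in views:
        V_hat = V + np.eye(n)              # the all-zero view reduces to self-loops
        d = V_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        reps.append(D_inv_sqrt @ V_hat @ D_inv_sqrt @ X)
    return np.mean(reps, axis=0)
```

The intuition matches the abstract's observation: if an attacker blurs a node's context by wiring it to dissimilar nodes, the feature and ego views are unaffected by the injected edges, so the averaged representation stays closer to the clean one.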
Related Papers
50 records
  • [1] A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
    Mu, Jiaming
    Wang, Binghui
    Li, Qi
    Sun, Kun
    Xu, Mingwei
    Liu, Zhuotao
    [J]. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021, : 108 - 125
  • [2] Black-Box Adversarial Attack on Graph Neural Networks With Node Voting Mechanism
    Wen, Liangliang
    Liang, Jiye
    Yao, Kaixuan
    Wang, Zhiqiang
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (10) : 5025 - 5038
  • [3] Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge
    Sun, Qin
    Yang, Zheng
    Liu, Zhiming
    Zou, Quan
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2023, 2023, 14117 : 203 - 217
  • [4] Query efficient black-box adversarial attack on deep neural networks
    Bai, Yang
    Wang, Yisen
    Zeng, Yuyuan
    Jiang, Yong
    Xia, Shu-Tao
    [J]. PATTERN RECOGNITION, 2023, 133
  • [5] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    [J]. PATTERN RECOGNITION, 2022, 131
  • [6] Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
    Hirose, Yudai
    Ono, Satoshi
    [J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [7] A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks
    Kuang, Xiaohui
    Liu, Hongyi
    Wang, Ye
    Zhang, Qikun
    Zhang, Quanxin
    Zheng, Jun
    [J]. IEEE ACCESS, 2019, 7 : 172938 - 172947
  • [8] A Black-Box Adversarial Attack Method via Nesterov Accelerated Gradient and Rewiring Towards Attacking Graph Neural Networks
    Zhao, Shu
    Wang, Wenyu
    Du, Ziwei
    Chen, Jie
    Duan, Zhen
    [J]. IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (06) : 1586 - 1597
  • [9] Simulator Attack+ for Black-Box Adversarial Attack
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [10] NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
    Li, Yandong
    Li, Lijun
    Wang, Liqiang
    Zhang, Tong
    Gong, Boqing
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97