A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited by: 16
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Funding
National Key R&D Program of China;
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification;
DOI
10.1145/3460120.3484796
Chinese Library Classification (CLC)
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance on graph-structured tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing work mainly focuses on attacking GNNs for node classification; attacks against GNNs for graph classification remain largely unexplored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via structural perturbation. In particular, we focus on the most challenging setting, the hard-label black-box attack, where the attacker has no knowledge of the target GNN model and can obtain only predicted labels by querying it. We formulate the attack as an optimization problem whose objective is to minimize the number of perturbed edges in a graph while maintaining a high attack success rate. The original problem is intractable, so we relax it to a tractable one that we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to reduce the number of queries to the target GNN model. Experimental results on three real-world datasets demonstrate that our attack effectively compromises representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the attack against two defenses: a well-designed adversarial graph detector, and a target GNN model equipped with a built-in defense against adversarial graph generation. Our results show that such defenses are not effective enough, which highlights the need for more advanced defenses.
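The abstract compresses the method into a few clauses. One plausible reading of the optimization it describes, reconstructed from the abstract alone rather than taken from the paper, is

\[
\min_{\theta \in \{0,1\}^{m}} \; \|\theta\|_{0} \quad \text{s.t.} \quad f(G \oplus \theta) \neq y_{0},
\]

where \theta indicates which of the m candidate edges are flipped, G \oplus \theta is the graph with those edges toggled, f is the hard-label oracle, and y_0 is the label f originally assigns to G. Because f exposes only labels, such problems are typically relaxed and attacked with query-based zeroth-order optimization. Below is a minimal sketch of that query pattern, assuming a hypothetical hard-label model.predict oracle and a sign-based gradient estimate; every name and constant here is illustrative, not the authors' implementation.

import numpy as np

def query_label(model, A, flips):
    # One hard-label query: the class predicted for adjacency matrix A
    # with the edges marked in the boolean mask `flips` toggled.
    # `model.predict` is a hypothetical oracle, not a known API.
    return model.predict(np.logical_xor(A.astype(bool), flips).astype(float))

def g(model, A, y0, theta, t_max=1.0, steps=20):
    # Smallest scale t at which rounding t*theta to edge flips changes
    # the prediction, found by a coarse line search over queries.
    for t in np.linspace(0.0, t_max, steps):
        if query_label(model, A, (t * theta) > 0.5) != y0:
            return t
    return t_max

def estimate_grad(model, A, y0, theta, n_dirs=10, eps=1e-3):
    # Sign-based zeroth-order estimate of the gradient of g at theta.
    # Each random direction costs one evaluation of g, i.e., a handful
    # of hard-label queries.
    g0 = g(model, A, y0, theta)
    grad = np.zeros_like(theta)
    for _ in range(n_dirs):
        u = np.random.randn(*theta.shape)
        u /= np.linalg.norm(u)
        grad += np.sign(g(model, A, y0, theta + eps * u) - g0) * u
    return grad / n_dirs

def hard_label_attack(model, A, y0, iters=100, lr=0.1):
    # Adjacency symmetry and degree constraints are ignored for brevity.
    theta = np.random.rand(*A.shape)  # relaxed edge-perturbation vector
    for _ in range(iters):
        theta = np.clip(theta - lr * estimate_grad(model, A, y0, theta), 0.0, 1.0)
    return (g(model, A, y0, theta) * theta) > 0.5  # discrete edge flips

The coarse-grained search and query-efficiency techniques the abstract mentions would replace the naive line search and the fixed number of random directions above; the sketch only illustrates why, in the hard-label setting, every gradient step costs queries rather than backpropagation.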
Pages: 108-125
Page count: 18