A Hard Label Black-box Adversarial Attack Against Graph Neural Networks

Cited: 16
Authors
Mu, Jiaming [1 ,2 ]
Wang, Binghui [3 ]
Li, Qi [1 ,2 ]
Sun, Kun [4 ]
Xu, Mingwei [1 ,2 ]
Liu, Zhuotao [1 ,2 ]
Affiliations
[1] Tsinghua Univ, Inst Network Sci & Cyberspace, Dept Comp Sci, Beijing, Peoples R China
[2] Tsinghua Univ, BNRist, Beijing, Peoples R China
[3] Illinois Inst Technol, Chicago, IL USA
[4] George Mason Univ, Fairfax, VA 22030 USA
Funding
National Key Research and Development Program of China;
Keywords
Black-box adversarial attack; structural perturbation; graph neural networks; graph classification;
DOI
10.1145/3460120.3484796
CLC Classification
TP [Automation & Computer Technology];
Discipline Code
0812 ;
Abstract
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various graph-structure-related tasks such as node classification and graph classification. However, GNNs are vulnerable to adversarial attacks. Existing works mainly focus on attacking GNNs for node classification; attacks against GNNs for graph classification have not been well explored. In this work, we conduct a systematic study of adversarial attacks against GNNs for graph classification via perturbing the graph structure. In particular, we focus on the most challenging setting, the hard label black-box attack, where an attacker has no knowledge of the target GNN model and can only obtain predicted labels by querying the target model. To achieve this goal, we formulate our attack as an optimization problem whose objective is to minimize the number of edges to be perturbed in a graph while maintaining a high attack success rate. The original optimization problem is intractable, so we relax it into a tractable one, which we solve with a theoretical convergence guarantee. We also design a coarse-grained searching algorithm and a query-efficient gradient computation algorithm to decrease the number of queries to the target GNN model. Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations. We also evaluate the effectiveness of our attack under two defenses: one is a well-designed adversarial graph detector, and the other equips the target GNN model itself with a defense to prevent adversarial graph generation. Our experimental results show that such defenses are not effective enough, which highlights the need for more advanced defenses.
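The hard-label setting described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the names `query_label` and `coarse_search_attack` are hypothetical, the oracle is a toy stand-in for the target GNN, and the search is a naive greedy loop that tries progressively larger edge-flip budgets until the oracle's predicted label changes, mirroring the goal of misclassifying the graph with as few perturbed edges as possible.

```python
import random

def query_label(adj):
    # Hypothetical hard-label oracle. In the real attack this is the
    # target GNN, which returns only a predicted class for a graph.
    # Toy stand-in here: classify by the parity of the edge count.
    n = len(adj)
    n_edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return n_edges % 2

def coarse_search_attack(adj, true_label, max_flips=3, budget=200, seed=0):
    """Greedy hard-label attack sketch: flip as few edges as possible
    until the oracle's label differs from the true label."""
    rng = random.Random(seed)
    n = len(adj)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    queries = 0
    for k in range(1, max_flips + 1):       # smallest perturbation first
        for _ in range(budget):
            flips = rng.sample(pairs, k)
            cand = [row[:] for row in adj]
            for i, j in flips:              # flip (add or remove) each edge
                cand[i][j] ^= 1
                cand[j][i] ^= 1
            queries += 1
            if query_label(cand) != true_label:
                return cand, flips, queries  # adversarial graph found
    return None, [], queries                 # attack failed within budget
```

With the parity oracle a single edge flip always changes the label, so the search succeeds at the smallest budget; against a real GNN the paper instead relaxes the search into a continuous optimization solved with query-efficient gradient estimates.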
Pages: 108 - 125
Page count: 18
Related Papers
50 total
  • [41] BFS2Adv: Black-box adversarial attack towards hard-to-attack short texts
    Han, Xu
    Li, Qiang
    Cao, Hongbo
    Han, Lei
    Wang, Bin
    Bao, Xuhua
    Han, Yufei
    Wang, Wei
    COMPUTERS & SECURITY, 2024, 141
  • [42] Transferable Black-Box Attack Against Face Recognition With Spatial Mutable Adversarial Patch
    Ma, Haotian
    Xu, Ke
    Jiang, Xinghao
    Zhao, Zeyu
    Sun, Tanfeng
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 5636 - 5650
  • [43] A CMA-ES-Based Adversarial Attack Against Black-Box Object Detectors
    LYU Haoran
    TAN Yu'an
    XUE Yuan
    WANG Yajie
    XUE Jingfeng
    Chinese Journal of Electronics, 2021, 30 (03) : 406 - 412
  • [45] Black-box Adversarial Attack Against Road Sign Recognition Model via PSO
    Chen J.-Y.
    Chen Z.-Q.
    Zheng H.-B.
    Shen S.-J.
    Su M.-M.
    Ruan Jian Xue Bao/Journal of Software, 2020, 31 (09): : 2785 - 2801
  • [46] HyGloadAttack: Hard-label black-box textual adversarial attacks via hybrid optimization
    Liu, Zhaorong
    Xiong, Xi
    Li, Yuanyuan
    Yu, Yan
    Lu, Jiazhong
    Zhang, Shuai
    Xiong, Fei
    NEURAL NETWORKS, 2024, 178
  • [47] PAT: Geometry-Aware Hard-Label Black-Box Adversarial Attacks on Text
    Ye, Muchao
    Chen, Jinghui
    Miao, Chenglin
    Liu, Han
    Wang, Ting
    Ma, Fenglong
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 3093 - 3104
  • [48] Simple and Efficient Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes
    Shukla, Satya Narayan
    Sahu, Anit Kumar
    Willmott, Devin
    Kolter, Zico
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 1461 - 1469
  • [49] PRADA: Practical Black-box Adversarial Attacks against Neural Ranking Models
    Wu, Chen
    Zhang, Ruqing
    Guo, Jiafeng
    De Rijke, Maarten
    Fan, Yixing
    Cheng, Xueqi
    ACM TRANSACTIONS ON INFORMATION SYSTEMS, 2023, 41 (04)
  • [50] Research Status of Black-Box Intelligent Adversarial Attack Algorithms
    Wei, Jian
    Song, Xiaoqing
    Wang, Qinzhao
    Computer Engineering and Applications, 2023, 59 (13) : 61 - 73