Adversarial Attacks on Graph Classification via Bayesian Optimisation

Cited by: 0
Authors
Wan, Xingchen [1 ]
Kenlay, Henry [1 ]
Ru, Binxin [1 ]
Blaas, Arno [1 ]
Osborne, Michael A. [1 ]
Dong, Xiaowen [1 ]
Affiliations
[1] Univ Oxford, Machine Learning Res Grp, Oxford, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Graph neural networks, a popular class of models effective in a wide range of graph-based learning tasks, have been shown to be vulnerable to adversarial attacks. While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis. The few existing methods often require unrealistic setups, such as access to internal information of the victim models, or an impractically large number of queries. We present a novel Bayesian optimisation-based attack method for graph classification models. Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied. We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks involving varying graph properties, constraints and modes of attack. Finally, we analyse common interpretable patterns behind the adversarial samples produced, which may shed further light on the adversarial robustness of graph classification models. An open-source implementation is available at https://github.com/xingchenwan/grabnel.
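The black-box, query-budgeted setting the abstract describes can be illustrated with a toy sketch. This is not the authors' GRABNEL method (which drives the search with a Bayesian-optimisation surrogate); a simple greedy edge-flip search stands in for it here, and `victim_score` is a hypothetical stand-in for querying the victim classifier's confidence in the correct class:

```python
import numpy as np

rng = np.random.default_rng(0)

def victim_score(adj):
    # Hypothetical black-box victim: returns the model's confidence in the
    # correct class for the graph with adjacency matrix `adj`. Here a toy
    # stand-in based on edge density and triangle count.
    n = adj.shape[0]
    density = adj.sum() / (n * (n - 1))
    triangles = np.trace(adj @ adj @ adj) / 6
    return 1.0 / (1.0 + np.exp(-(4 * density + 0.1 * triangles - 2)))

def attack(adj, budget=3, n_queries=30, n_candidates=8):
    """Greedy black-box attack sketch: under a fixed query budget, propose
    random single-edge flips, query the victim on each, and keep a flip as
    soon as it lowers the victim's confidence (perturbation-parsimonious:
    at most `budget` edges are flipped)."""
    adj = adj.copy()
    best = victim_score(adj)
    queries = 1
    for _ in range(budget):
        # Propose candidate single-edge flips (off-diagonal, symmetric).
        cands = []
        while len(cands) < n_candidates:
            i, j = rng.integers(0, adj.shape[0], size=2)
            if i != j:
                cands.append((min(i, j), max(i, j)))
        improved = False
        for (i, j) in cands:
            if queries >= n_queries:
                break
            trial = adj.copy()
            trial[i, j] = trial[j, i] = 1 - trial[i, j]
            s = victim_score(trial)
            queries += 1
            if s < best:
                adj, best = trial, s
                improved = True
                break  # accept the first improving flip (query-frugal)
        if not improved:
            break
    return adj, best, queries
```

A surrogate-based method such as the one the paper proposes would replace the random proposals with candidates ranked by a model fitted to the query history, spending the same query budget far more efficiently.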
Pages: 14