Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees

Cited by: 3
Authors
Wang, Binghui [1 ]
Li, Youqi [2 ,3 ]
Zhou, Pan [4 ]
Affiliations
[1] IIT, Dept Comp Sci, Chicago, IL 60616 USA
[2] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing, Peoples R China
[3] Beijing Inst Technol, Sch Comp Sci, Beijing, Peoples R China
[4] Huazhong Univ Sci & Technol, Hubei Engn Res Ctr Big Data Secur, Sch Cyber Sci & Engn, Wuhan, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
DOI
10.1109/CVPR52688.2022.01302
CLC classification
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks such as node classification and graph classification. However, many recent works have demonstrated that an attacker can mislead GNN models by slightly perturbing the graph structure. Existing attacks to GNNs either assume the less practical threat model in which the attacker can access the GNN model parameters, or work under the practical black-box threat model but perturb node features, which has been shown to be insufficiently effective. In this paper, we aim to bridge this gap and study black-box attacks to GNNs via structure perturbation, with theoretical guarantees. We propose to address this challenge through bandit techniques. Specifically, we formulate our attack as an online optimization problem with bandit feedback. The resulting problem is essentially NP-hard because perturbing the graph structure is a binary optimization problem. We then propose an online attack based on bandit optimization whose regret is proven to be sublinear in the number of queries $T$, i.e., $O(\sqrt{N}\,T^{3/4})$, where $N$ is the number of nodes in the graph. Finally, we evaluate the proposed attack through experiments on multiple datasets and GNN models. The experimental results on various citation graphs and image graphs show that our attack is both effective and efficient.
Pages: 13369-13377
Page count: 9
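To make the abstract's formulation more concrete, the following is a minimal numpy sketch of the general idea only, not the authors' algorithm: a relaxed perturbation vector over candidate edge flips is updated with a one-point gradient estimate obtained from a single black-box query per round, then rounded to a binary perturbation within the budget. The `query_loss` function is a hypothetical stand-in for querying the target GNN on the perturbed graph, and all hyperparameters are illustrative.

```python
# Sketch of a bandit-feedback structure-perturbation attack (illustrative only).
# A relaxed vector s over candidate edge flips is updated from single-query
# (one-point) gradient estimates, then rounded to a binary perturbation.
import numpy as np

rng = np.random.default_rng(0)

N = 20                         # number of nodes (toy size)
num_edges = N * (N - 1) // 2   # candidate edge flips (upper triangle)
budget = 5                     # maximum number of edges we may flip
T = 200                        # query budget
delta = 0.05                   # smoothing radius for the one-point estimator
eta = 1e-4                     # step size (illustrative)

# Hypothetical black-box attack loss: a real attack would build the perturbed
# adjacency matrix, query the target GNN, and return e.g. the negated margin
# of the correct class. Here it is just a fixed quadratic as a stand-in.
target = rng.random(num_edges)
def query_loss(s):
    return float(np.sum((s - target) ** 2))

def project(s, k):
    """Project onto [0,1]^d with an (approximate) budget of k flips."""
    s = np.clip(s, 0.0, 1.0)
    if s.sum() > k:            # simple rescaling instead of an exact
        s *= k / s.sum()       # capped-simplex projection
    return s

s = np.full(num_edges, budget / num_edges)   # relaxed perturbation vector
for t in range(T):
    u = rng.standard_normal(num_edges)
    u /= np.linalg.norm(u)                   # random unit direction
    loss = query_loss(project(s + delta * u, budget))   # one query per round
    grad_est = (num_edges / delta) * loss * u           # one-point estimate
    s = project(s - eta / np.sqrt(t + 1) * grad_est, budget)

# Round the relaxed solution to a binary perturbation within the budget.
flips = np.argsort(-s)[:budget]
print("edges selected for flipping:", flips)
```

In practice the rounding step and the exact projection onto the flip budget matter for the guarantees; this sketch uses crude substitutes for both and is meant only to illustrate how bandit (query-only) feedback drives the structure perturbation.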
Related papers
50 records in total
  • [1] Black-Box Attacks on Graph Neural Networks via White-Box Methods With Performance Guarantees. Yang, Jielong; Ding, Rui; Chen, Jianyu; Zhong, Xionghu; Zhao, Huarong; Xie, Linbo. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(10): 18193-18204.
  • [2] Simple Black-Box Adversarial Attacks on Deep Neural Networks. Narodytska, Nina; Kasiviswanathan, Shiva. 2017 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2017: 1310-1318.
  • [3] Spectral Privacy Detection on Black-box Graph Neural Networks. Yang, Yining; Lu, Jialiang. 2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023.
  • [4] Black-box Adversarial Attack and Defense on Graph Neural Networks. Li, Haoyang; Di, Shimin; Li, Zijian; Chen, Lei; Cao, Jiannong. 2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022), 2022: 1017-1030.
  • [5] Local perturbation-based black-box federated learning attack for time series classification. Chen, Shengbo; Yuan, Jidong; Wang, Zhihai; Sun, Yongqi. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 158: 488-500.
  • [6] Towards Lightweight Black-Box Attacks Against Deep Neural Networks. Sun, Chenghao; Zhang, Yonggang; Wan, Chaoqun; Wang, Qizhou; Li, Ya; Liu, Tongliang; Han, Bo; Tian, Xinmei. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022.
  • [7] Black-Box Adversarial Attack on Graph Neural Networks Based on Node Domain Knowledge. Sun, Qin; Yang, Zheng; Liu, Zhiming; Zou, Quan. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2023, 2023, 14117: 203-217.
  • [8] Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks. Li, Huiying; Shan, Shawn; Wenger, Emily; Zhang, Jiayun; Zheng, Haitao; Zhao, Ben Y. PROCEEDINGS OF THE 31ST USENIX SECURITY SYMPOSIUM, 2022: 2117-2134.
  • [9] Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function. Wang, Binghui; Lin, Minhua; Zhou, Tianxiang; Zhou, Pan; Li, Ang; Pang, Meng; Li, Hai; Chen, Yiran. PROCEEDINGS OF THE 17TH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2024, 2024: 693-701.
  • [10] A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. Mu, Jiaming; Wang, Binghui; Li, Qi; Sun, Kun; Xu, Mingwei; Liu, Zhuotao. CCS '21: PROCEEDINGS OF THE 2021 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2021: 108-125.