More is Better (Mostly): On the Backdoor Attacks in Federated Graph Neural Networks

Cited by: 5
Authors:
Xu, Jing [1]
Wang, Rui [1]
Koffas, Stefanos [1]
Liang, Kaitai
Picek, Stjepan [1,2]
Affiliations:
[1] Delft Univ Technol, Delft, Netherlands
[2] Radboud Univ Nijmegen, Nijmegen, Netherlands
Keywords:
backdoor attacks; graph neural networks; federated learning
DOI: 10.1145/3564625.3567999
CLC classification: TP [Automation technology, computer technology]
Discipline code: 0812
Abstract:
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations for complex graph data. Due to privacy concerns and regulatory restrictions, centralized GNNs can be difficult to apply to data-sensitive scenarios. Federated learning (FL) is an emerging technology developed for privacy-preserving settings in which several parties need to train a shared global model collaboratively. Although several research works have applied FL to train GNNs (Federated GNNs), there is no research on their robustness to backdoor attacks. This paper bridges this gap by conducting two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Our experiments show that the DBA attack success rate is higher than that of CBA in almost all cases. For CBA, the attack success rate of all local triggers is similar to that of the global trigger, even though only the global trigger is embedded in the adversarial party's training set. To explore the properties of the two backdoor attacks in Federated GNNs, we evaluate the attack performance for different numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Finally, we explore the robustness of DBA and CBA against two state-of-the-art defenses. We find that both attacks are robust against the investigated defenses, showing that backdoor attacks in Federated GNNs are a novel threat that requires custom defenses.
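The abstract's core distinction between CBA and DBA is how the trigger pattern is distributed across adversarial clients: in CBA every malicious client poisons its data with the full global trigger, while in DBA the global trigger is split into disjoint local triggers, one per malicious client. The sketch below illustrates only that trigger-assignment logic (not the paper's actual attack implementation); the trigger is modeled here, purely for illustration, as a list of edges of a subgraph, and all function names are hypothetical.

```python
# Hypothetical sketch of CBA vs. DBA trigger assignment.
# A "trigger" is modeled as a list of edges (node-index pairs) of a
# subgraph to be injected into poisoned training graphs. This is an
# illustration of the concept, not the paper's implementation.

def cba_triggers(global_trigger, n_malicious):
    """CBA: every adversarial client embeds the full global trigger."""
    return [list(global_trigger) for _ in range(n_malicious)]

def dba_triggers(global_trigger, n_malicious):
    """DBA: the global trigger is split into disjoint local triggers,
    one slice per adversarial client; their union is the global trigger."""
    k, r = divmod(len(global_trigger), n_malicious)
    parts, start = [], 0
    for i in range(n_malicious):
        end = start + k + (1 if i < r else 0)  # spread remainder edges
        parts.append(global_trigger[start:end])
        start = end
    return parts

# Example: a 4-edge global trigger split between 2 malicious clients.
trigger = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(cba_triggers(trigger, 2))  # both clients get all 4 edges
print(dba_triggers(trigger, 2))  # each client gets 2 disjoint edges
```

Under this model, at test time the DBA adversary assembles the full global trigger, which is why the paper measures attack success of local triggers against the global trigger separately.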
Pages: 684-698 (15 pages)