Adversarial Attack and Defense on Discrete Time Dynamic Graphs

Cited by: 0
Authors
Zhao, Ziwei [1 ]
Yang, Yu [2 ]
Yin, Zikai [1 ]
Xu, Tong [1 ]
Zhu, Xi [1 ]
Lin, Fake [1 ]
Li, Xueying [3 ]
Chen, Enhong [1 ]
Affiliations
[1] Univ Sci & Technol China, State Key Lab Cognit Intelligence, Hefei 230026, Peoples R China
[2] City Univ Hong Kong, Sch Data Sci, Kowloon Tong, Hong Kong, Peoples R China
[3] Alibaba Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Robustness; Perturbation methods; Learning systems; Optimization; Topology; Task analysis; Adversarial attack; dynamic graph representation; graph learning; robust training; Queries
DOI
10.1109/TKDE.2024.3438238
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Graph learning methods have achieved remarkable performance in domains such as social recommendation and financial fraud detection. In real applications, the underlying graph often evolves over time, and some recent studies therefore integrate the temporal topology of the graph into GNNs for learning graph embeddings. However, the robustness of training GNNs on dynamic graphs has not been studied so far, largely because how to attack dynamic graph embeddings remains unexplored, let alone how to defend against such attacks. To enable robust training of GNNs on dynamic graphs, in this paper we investigate how to generate attacks on dynamic graph embeddings and how to defend against them. Attacking a dynamic graph embedding is more challenging than attacking a static one: we must understand the temporal dynamics of the graph and their impact on the embedding, and the injected perturbations need to be distinguished from the graph's natural evolution. The defense is also challenging, as the perturbations may be hidden within that natural evolution. To tackle these challenges, we first develop a novel gradient-based attack method from an optimization perspective that generates perturbations to fool dynamic graph learning methods; the key idea is to use gradient dynamics to attack the natural dynamics of the graph. We then borrow the idea of the attack method and integrate it with adversarial training to obtain a more robust dynamic graph learning method that defends against hand-crafted attacks. Finally, extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed attack and defense methods: our defense not only achieves comparable performance on clean graphs but also significantly improves performance on attacked graphs.
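To make the two components described in the abstract concrete (a gradient-based perturbation attack on snapshot adjacencies, and adversarial training that reuses that attack), the following is a minimal sketch in PyTorch. The toy architecture (a per-snapshot GCN-style layer followed by a GRU) and every name in it (SimpleDTDG, gradient_flip_attack, adversarial_train, the budget parameter) are illustrative assumptions, not the paper's actual model or implementation.

```python
# A minimal, illustrative sketch of the two ideas the abstract describes:
# (1) a gradient-based edge-flip attack on a discrete-time dynamic GNN, and
# (2) adversarial training that reuses the attack to harden the model.
# All names (SimpleDTDG, gradient_flip_attack, adversarial_train) and the toy
# architecture are assumptions for illustration, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleDTDG(nn.Module):
    """Toy discrete-time dynamic GNN: a GCN-style layer per snapshot, GRU over time."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.gcn = nn.Linear(in_dim, hid_dim)
        self.gru = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, adjs, x):
        # adjs: list of (N, N) dense adjacency snapshots; x: (N, in_dim) node features
        states = []
        for a in adjs:
            a_hat = a + torch.eye(a.size(0), device=a.device)   # add self-loops
            d = a_hat.sum(1).clamp(min=1e-6).pow(-0.5)
            a_norm = d[:, None] * a_hat * d[None, :]            # symmetric normalization
            states.append(F.relu(self.gcn(a_norm @ x)))
        h_seq = torch.stack(states, dim=1)                      # (N, T, hid_dim)
        _, h_last = self.gru(h_seq)                             # temporal aggregation
        return self.head(h_last.squeeze(0))                     # (N, n_classes) logits


def gradient_flip_attack(model, adjs, x, labels, budget=5):
    """Per snapshot, flip the `budget` adjacency entries whose gradient indicates
    the largest increase in the training loss (a first-order, greedy heuristic)."""
    adj_vars = [a.clone().detach().requires_grad_(True) for a in adjs]
    loss = F.cross_entropy(model(adj_vars, x), labels)
    grads = torch.autograd.grad(loss, adj_vars)
    perturbed = []
    for a, g in zip(adjs, grads):
        # Flipping entry (i, j) changes a_ij by +1 if absent and -1 if present,
        # so the first-order loss change of a flip is g_ij * (1 - 2 * a_ij).
        score = g * (1 - 2 * a)
        idx = torch.topk(score.flatten(), budget).indices
        a_new = a.clone()
        a_new.view(-1)[idx] = 1 - a_new.view(-1)[idx]           # apply the flips
        # A realistic attack would also exclude self-loops, keep the matrix
        # symmetric, and constrain flips to look like plausible evolution.
        perturbed.append(a_new)
    return perturbed


def adversarial_train(model, adjs, x, labels, epochs=50, budget=5, lr=1e-2):
    """Alternate between attacking the current model and training on a mix of
    clean and perturbed snapshots, in the spirit of adversarial training."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.eval()
        adv_adjs = gradient_flip_attack(model, adjs, x, labels, budget)
        model.train()
        opt.zero_grad()
        loss = (F.cross_entropy(model(adjs, x), labels)
                + F.cross_entropy(model(adv_adjs, x), labels))
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    # Smoke test on random data: N nodes, T snapshots, C classes.
    N, T, feat_dim, C = 30, 4, 8, 3
    adjs = [torch.bernoulli(torch.full((N, N), 0.1)) for _ in range(T)]
    x, labels = torch.randn(N, feat_dim), torch.randint(0, C, (N,))
    adversarial_train(SimpleDTDG(feat_dim, 16, C), adjs, x, labels, epochs=10)
```

The flip score g_ij * (1 - 2 * a_ij) is a first-order estimate of how much flipping an entry raises the loss, which is one plausible reading of the abstract's "gradient dynamics versus natural dynamics" idea; the paper's actual attack and defense presumably add budget and stealth constraints across snapshots that this sketch omits.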
Pages: 7600-7611
Number of pages: 12