Adversarial Attack and Defense on Discrete Time Dynamic Graphs

Cited by: 0
Authors
Zhao, Ziwei [1 ]
Yang, Yu [2 ]
Yin, Zikai [1 ]
Xu, Tong [1 ]
Zhu, Xi [1 ]
Lin, Fake [1 ]
Li, Xueying [3 ]
Chen, Enhong [1 ]
Affiliations
[1] Univ Sci & Technol China, State Key Lab Cognit Intelligence, Hefei 230026, Peoples R China
[2] City Univ Hong Kong, Sch Data Sci, Kowloon Tong, Hong Kong, Peoples R China
[3] Alibaba Grp, Hangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Robustness; Perturbation methods; Learning systems; Optimization; Topology; Task analysis; Adversarial attack; dynamic graph representation; graph learning; robust training; OPTIMIZATION; QUERIES;
DOI
10.1109/TKDE.2024.3438238
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph learning methods have achieved remarkable performance in various domains such as social recommendation and financial fraud detection. In real applications, the underlying graph often evolves dynamically, and thus some recent studies focus on integrating the temporal topology of graphs into GNNs for learning graph embeddings. However, the robustness of training GNNs on dynamic graphs has not been discussed so far, mainly because how to attack dynamic graph embeddings remains largely untouched, let alone how to defend against such attacks. To enable robust training of GNNs on dynamic graphs, in this paper we investigate how to generate attacks and how to defend against them for dynamic graph embedding. Attacking dynamic graph embedding is more challenging than attacking static graph embedding, as we need to understand the temporal dynamics of the graph and their impact on the embedding, and the injected perturbations should be hard to distinguish from the natural evolution. Defense is likewise challenging, because the perturbations may be hidden within the natural evolution. To tackle these technical challenges, we first develop a novel gradient-based attack method from an optimization perspective that generates perturbations to fool dynamic graph learning methods, where the key idea is to use gradient dynamics to attack the natural dynamics of the graph. Further, we borrow the idea of the attack method and integrate it with adversarial training to obtain a more robust dynamic graph learning method that defends against hand-crafted attacks. Finally, extensive experiments on two real-world datasets demonstrate the effectiveness of the proposed attack and defense methods: our defense method not only achieves comparable performance on clean graphs but also significantly improves performance on attacked graphs.
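To make the two ideas in the abstract concrete, the following is a minimal illustrative sketch, not the authors' released implementation: a greedy gradient-based structural attack that flips the adjacency entries of a discrete-time dynamic graph whose gradients most increase the task loss, followed by a simple adversarial-training loop that retrains the model on the perturbed snapshots. The forward signature `model(adjs, feats)` and the helper `task_loss(preds, labels)` are assumptions for illustration, and the sketch omits stealthiness constraints (e.g., keeping perturbations consistent with natural evolution) that the paper's method addresses.

```python
import torch

# Illustrative sketch only; `model(adjs, feats)` and `task_loss(preds, labels)`
# are hypothetical placeholders for a dynamic GNN and its task objective.

def gradient_attack(model, task_loss, adjs, feats, labels, budget):
    """Greedy gradient-based structural attack on a discrete-time dynamic graph.

    adjs: list of dense [N, N] {0,1} adjacency tensors, one per snapshot.
    budget: total number of edge flips allowed across all snapshots.
    """
    adjs = [a.clone().float().requires_grad_(True) for a in adjs]
    loss = task_loss(model(adjs, feats), labels)
    grads = torch.autograd.grad(loss, adjs)

    # Score each candidate flip: adding an absent edge (entry 0) helps the
    # attacker when the gradient is positive; removing an existing edge
    # (entry 1) helps when the gradient is negative.
    scores = torch.stack([(1 - 2 * a.detach()) * g for a, g in zip(adjs, grads)])
    _, idx = torch.topk(scores.view(-1), budget)

    perturbed = torch.stack([a.detach().clone() for a in adjs])
    flat = perturbed.view(-1)
    flat[idx] = 1.0 - flat[idx]  # flip the selected entries in place
    return list(perturbed)


def adversarial_training(model, optimizer, task_loss, adjs, feats, labels,
                         budget, epochs=50):
    """Retrain the dynamic graph model on freshly generated perturbations."""
    for _ in range(epochs):
        adv_adjs = gradient_attack(model, task_loss, adjs, feats, labels, budget)
        optimizer.zero_grad()
        loss = task_loss(model(adv_adjs, feats), labels)
        loss.backward()
        optimizer.step()
```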
Pages: 7600 - 7611
Number of pages: 12