Smoothing Adversarial Training for GNN

Citations: 12
Authors
Chen, Jinyin [1 ]
Lin, Xiang [1 ]
Xiong, Hui [1 ]
Wu, Yangyang [1 ]
Zheng, Haibin [1 ]
Xuan, Qi [1 ]
Institution
[1] Zhejiang Univ Technol, Inst Cyberspace Secur, Coll Informat Engn, Hangzhou 310023, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Training; Smoothing methods; Robustness; Topology; Data models; Task analysis; Predictive models; Adversarial attack; adversarial training; complex network; cross-entropy loss; smoothing distillation (SD); LIVE;
DOI
10.1109/TCSS.2020.3042628
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Recently, graph neural networks (GNNs) have been proposed to analyze various graphs/networks and have been shown to outperform many other network analysis methods. However, these state-of-the-art methods also suffer from adversarial attacks: carefully crafted adversarial networks, obtained by slightly perturbing clean ones, can invalidate them in many applications such as network embedding, node classification, link prediction, and community detection. Adversarial training has proven to be an effective defense strategy against adversarial attacks in computer vision and graph mining. However, almost all adversarial-training-based algorithms focus on global defense through overall adversarial training. In a more practical scenario, specific labeled users are targeted for attack, and defending against such target node attacks remains a challenge for existing adversarial training methods. Therefore, we propose smoothing adversarial training (SAT) to improve the robustness of GNNs. In particular, we analytically investigate the robustness of the graph convolutional network (GCN), one of the classic GNNs, and propose two smooth defensive strategies: smoothing distillation and a smoothing cross-entropy loss function. Both smooth the gradients of the GCN and, consequently, reduce the amplitude of adversarial gradients, masking the gradients from attackers in both global attacks and target label node attacks. Comprehensive experiments on five real-world networks show that the proposed SAT method achieves state-of-the-art defense performance against different adversarial attacks on node classification and community detection. In particular, SAT decreases the average attack success rate of different attack methods by about 40%, at the cost of a tolerable decline in the embedding performance of the original network.
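The two defensive strategies named in the abstract, smoothing distillation and the smoothing cross-entropy loss, both work by softening the model's output distribution so that the gradients an attacker can exploit shrink in amplitude. The sketch below illustrates those two ideas in generic PyTorch form; it is not the paper's exact formulation, and the `temperature`, `smoothing`, and loss-weighting values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def smoothing_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Distillation-style smoothing: match the model's temperature-softened
    predictions to a teacher's softened predictions. The KL term is scaled by
    T^2 as in standard knowledge distillation. (Generic form; the temperature
    value is an illustrative assumption, not taken from the paper.)"""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2


def smoothing_cross_entropy(logits, labels, smoothing=0.1):
    """Label-smoothed cross-entropy: spread a small amount of probability mass
    over the non-target classes, which bounds the gradient on any single logit.
    (Generic form; the smoothing factor is an illustrative assumption.)"""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    with torch.no_grad():
        # Smoothed target: (1 - smoothing) on the true class,
        # smoothing / (C - 1) shared across the remaining classes.
        target = torch.full_like(log_probs, smoothing / (num_classes - 1))
        target.scatter_(-1, labels.unsqueeze(-1), 1.0 - smoothing)
    return -(target * log_probs).sum(dim=-1).mean()


# Example: combine both terms into a single node-classification objective
# (the 0.5 weighting is arbitrary, for illustration only).
if __name__ == "__main__":
    logits = torch.randn(8, 7, requires_grad=True)   # 8 nodes, 7 classes
    teacher_logits = torch.randn(8, 7)               # e.g. a pre-trained GCN
    labels = torch.randint(0, 7, (8,))
    loss = smoothing_cross_entropy(logits, labels) + \
        0.5 * smoothing_distillation_loss(logits, teacher_logits)
    loss.backward()
    print(float(loss))
```

Both terms flatten the softmax output, so the gradient of the loss with respect to the input graph is smaller in magnitude, which is the gradient-masking effect the abstract attributes to SAT.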
Pages: 618-629
Page count: 12
Related Papers (50 in total)
  • [1] Improving Single-Step Adversarial Training By Local Smoothing
    Wang, Shaopeng
    Huang, Yanhong
    Shi, Jianqi
    Yang, Yang
    Guo, Xin
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [2] A Multicore GNN Training Accelerator
    Mondal, Sudipta
    Ramprasath, S.
    Zeng, Ziqing
    Kunal, Kishor
    Sapatnekar, Sachin S.
    2023 IEEE/ACM INTERNATIONAL SYMPOSIUM ON LOW POWER ELECTRONICS AND DESIGN, ISLPED, 2023,
  • [3] Auto-Divide GNN: Accelerating GNN Training with Subgraph Division
    Chen, Hongyu
    Ran, Zhejiang
    Ge, Keshi
    Lai, Zhiquan
    Jiang, Jingfei
    Li, Dongsheng
    EURO-PAR 2023: PARALLEL PROCESSING, 2023, 14100 : 367 - 382
  • [4] Smoothing Model Predictions Using Adversarial Training Procedures for Speech Based Emotion Recognition
    Sahu, Saurabh
    Gupta, Rahul
    Sivaraman, Ganesh
    Espy-Wilson, Carol
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 4934 - 4938
  • [5] Model Smoothing using Virtual Adversarial Training for Speech Emotion Estimation using Spontaneity
    Kuwahara, Toyoaki
    Orihara, Ryohei
    Sei, Yuichi
    Tahara, Yasuyuki
    Ohsuga, Akihiko
    ICAART: PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2020, : 570 - 577
  • [6] Accelerating Distributed GNN Training by Codes
    Wang, Yanhong
    Guan, Tianchan
    Niu, Dimin
    Zou, Qiaosha
    Zheng, Hongzhong
    Shi, C.-J. Richard
    Xie, Yuan
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, 34 (09) : 2598 - 2614
  • [7] Adversarial Attack on GNN-based SAR Image Classifier
    Ye, Tian
    Kannan, Rajgopal
    Prasanna, Viktor
    Busart, Carl
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V, 2023, 12538
  • [8] BGS: Accelerate GNN training on multiple GPUs
    Tan, Yujuan
    Bai, Zhuoxin
    Liu, Duo
    Zeng, Zhaoyang
    Gan, Yan
    Ren, Ao
    Chen, Xianzhang
    Zhong, Kan
    JOURNAL OF SYSTEMS ARCHITECTURE, 2024, 153
  • [9] FlashGNN: An In-SSD Accelerator for GNN Training
    Niu, Fuping
    Yue, Jianhui
    Shen, Jiangqiu
    Liao, Xiaofei
    Jin, Hai
    2024 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE, HPCA 2024, 2024, : 361 - 378
  • [10] AIC-GNN: Adversarial information completion for graph neural networks
    Wei, Quanmin
    Wang, Jinyan
    Fu, Xingcheng
    Hu, Jun
    Li, Xianxian
    INFORMATION SCIENCES, 2023, 626 : 166 - 179