A two-stage co-adversarial perturbation to mitigate out-of-distribution generalization of large-scale graph

Cited: 0
Authors
Wang, Yili [1 ]
Xue, Haotian [1 ]
Wang, Xin [1 ]
Institutions
[1] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural network; Adversarial training; Graph out-of-distribution; NETWORK;
DOI
10.1016/j.eswa.2024.124472
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the realm of graph out-of-distribution (OOD) generalization, despite recent strides in advancing graph neural networks (GNNs) for the modeling of graph data, training GNNs on large-scale datasets presents a formidable hurdle due to the pervasive challenge of overfitting. To address this issue, researchers have explored adversarial training, a technique that enriches training data with worst-case adversarial examples. However, while prior work on adversarial training primarily focuses on safeguarding GNNs against malicious attacks, its potential to enhance the OOD generalization abilities of GNNs in the context of graph analytics remains less explored. In our research, we delve into the inner workings of GNNs by examining the weight and feature loss landscapes, which respectively illustrate how the loss function changes with respect to model weights and node features. Our investigation reveals a noteworthy phenomenon: GNNs are inclined to become trapped in sharp local minima within these loss landscapes, resulting in suboptimal OOD generalization performance. To address this challenge, we introduce the concept of co-adversarial perturbation (CAP) optimization, which considers both model weights and node features, and we design an alternating adversarial perturbation algorithm for graph out-of-distribution generalization. This algorithm operates iteratively, smoothing the weight and feature loss landscapes alternately. Moreover, our training process unfolds in two distinct stages. The first stage centers on standard cross-entropy minimization, ensuring rapid convergence of GNN models. In the second stage, we employ our alternating adversarial training strategy to prevent the models from becoming ensnared in locally sharp minima. Our extensive experiments provide compelling evidence that our CAP approach can generally enhance the OOD generalization performance of GNNs across a diverse range of large-scale graphs.
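The abstract describes the two-stage procedure only at a high level; below is a minimal PyTorch-style sketch of one possible reading of it, not the authors' released implementation. It assumes a `model` that maps node features directly to logits (graph structure is omitted for brevity), and names such as `warmup_epochs`, `rho_w`, and `rho_x` are illustrative hyper-parameters.

```python
# Minimal sketch of the two-stage training loop described in the abstract.
# Assumptions: `model` maps node features `x` to logits; `warmup_epochs`,
# `rho_w`, `rho_x` are hypothetical hyper-parameters.
import torch
import torch.nn.functional as F

def train_cap(model, x, y, epochs=200, warmup_epochs=100,
              lr=0.01, rho_w=0.05, rho_x=0.01):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        if epoch < warmup_epochs:
            # Stage 1: plain cross-entropy minimization for fast convergence.
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        elif (epoch - warmup_epochs) % 2 == 0:
            # Stage 2a: SAM-style weight perturbation to smooth the
            # weight loss landscape.
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            with torch.no_grad():
                grads = [p.grad for p in model.parameters() if p.grad is not None]
                norm = torch.norm(torch.stack([g.norm() for g in grads]))
                eps = []
                for p in model.parameters():
                    e = None
                    if p.grad is not None:
                        e = rho_w * p.grad / (norm + 1e-12)
                        p.add_(e)            # ascend to the worst-case weights
                    eps.append(e)
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()  # gradient at perturbed weights
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    if e is not None:
                        p.sub_(e)            # restore weights before descending
            opt.step()
        else:
            # Stage 2b: FGSM-style feature perturbation to smooth the
            # feature loss landscape.
            x_adv = x.detach().clone().requires_grad_(True)
            grad_x = torch.autograd.grad(
                F.cross_entropy(model(x_adv), y), x_adv)[0]
            x_adv = (x + rho_x * grad_x.sign()).detach()
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model
```

In this sketch the second stage alternates a sharpness-aware ascent on the weights with a one-step adversarial ascent on the node features, which mirrors the abstract's description of smoothing the two loss landscapes in turn; the exact perturbation schedule and radii in the paper may differ.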
Pages: 11
Related papers
50 records in total
  • [1] Graph Invariant Learning with Subgraph Co-mixup for Out-of-Distribution Generalization
    Jia, Tianrui
    Li, Haoyang
    Yang, Cheng
    Tao, Tao
    Shi, Chuan
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 8, 2024, : 8562 - 8570
  • [2] Fast two-stage phasing of large-scale sequence data
    Browning, Brian L.
    Tian, Xiaowen
    Zhou, Ying
    Browning, Sharon R.
    AMERICAN JOURNAL OF HUMAN GENETICS, 2021, 108 (10) : 1880 - 1890
  • [3] Two-Stage Sparse Representation for Robust Recognition on Large-Scale Database
    He, Ran
    Hu, BaoGang
    Zheng, Wei-Shi
    Guo, YanQing
    PROCEEDINGS OF THE TWENTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE (AAAI-10), 2010, : 475 - 480
  • [4] A two-stage design for multiple testing in large-scale association studies
    Shu-Hui Wen
    Jung-Ying Tzeng
    Jau-Tsuen Kao
    Chuhsing Kate Hsiao
    Journal of Human Genetics, 2006, 51 : 523 - 532
  • [5] Testing successive regression approximations by large-scale two-stage problems
    Deak, Istvan
    ANNALS OF OPERATIONS RESEARCH, 2011, 186 (01) : 83 - 99
  • [6] Two-Stage Precoding Method for the Finitely Large-Scale Antenna Systems
    Joonwoo Shin
    Wireless Personal Communications, 2015, 84 : 2549 - 2559
  • [7] A Two-Stage Vehicle Routing Model for Large-Scale Bioterrorism Emergencies
    Shen, Zhihong
    Dessouky, Maged M.
    Ordonez, Fernando
    NETWORKS, 2009, 54 (04) : 255 - 269
  • [8] Optimization of natural frequencies of large-scale two-stage raft system
    Lv Zhiqiang
    He Lin
    Shuai Changgeng
    13TH INTERNATIONAL CONFERENCE ON MOTION AND VIBRATION CONTROL (MOVIC 2016) AND THE 12TH INTERNATIONAL CONFERENCE ON RECENT ADVANCES IN STRUCTURAL DYNAMICS (RASD 2016), 2016, 744
  • [9] Two-Stage Optimal Scheduling Strategy for Large-Scale Electric Vehicles
    Wang, Xiuyun
    Sun, Chao
    Wang, Rutian
    Wei, Tianyuan
    IEEE ACCESS, 2020, 8 : 13821 - 13832