A two-stage co-adversarial perturbation to mitigate out-of-distribution generalization of large-scale graph

Cited: 0
Authors
Wang, Yili [1 ]
Xue, Haotian [1 ]
Wang, Xin [1 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural network; Adversarial training; Graph out-of-distribution
DOI
10.1016/j.eswa.2024.124472
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the realm of graph out-of-distribution (OOD) learning, despite recent strides in graph neural networks (GNNs) for modeling graph data, training GNNs on large-scale datasets remains difficult because of pervasive overfitting. To address this, researchers have explored adversarial training, a technique that enriches the training data with worst-case adversarial examples. However, prior work on adversarial training focuses primarily on safeguarding GNNs against malicious attacks; its potential to improve the OOD generalization of GNNs in graph analytics remains underexplored. In this work, we examine the weight and feature loss landscapes of GNNs, which describe how the loss function changes with respect to model weights and node features, respectively. Our investigation reveals a noteworthy phenomenon: GNNs tend to become trapped in sharp local minima of these loss landscapes, which degrades OOD generalization. To address this challenge, we introduce co-adversarial perturbation (CAP) optimization, which considers both model weights and node features, and we design an alternating adversarial perturbation algorithm for graph OOD generalization that iteratively smooths the weight and feature loss landscapes in turn. Our training process unfolds in two distinct stages. The first stage performs standard cross-entropy minimization, ensuring rapid convergence of the GNN model. The second stage applies our alternating adversarial training strategy to prevent the model from becoming ensnared in sharp local minima. Extensive experiments provide compelling evidence that CAP generally enhances the OOD generalization performance of GNNs across a diverse range of large-scale graphs.
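The abstract describes the procedure only at a high level. Below is a minimal PyTorch sketch of such a two-stage, alternating schedule, included for intuition only: the function name cap_train, the model(x, edge_index) call signature (PyTorch-Geometric style), the perturbation radii rho_w and rho_f, and the specific SAM-style weight step and FGSM-style feature step are all illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def cap_train(model, x, edge_index, y, train_mask,
              warmup_epochs=100, adv_epochs=100, lr=0.01,
              rho_w=5e-3, rho_f=1e-3):
    """Two-stage sketch: plain cross-entropy first, then alternating
    adversarial perturbations on weights and node features."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)

    def ce_loss(feat):
        # Node-classification loss restricted to the training nodes.
        return F.cross_entropy(model(feat, edge_index)[train_mask],
                               y[train_mask])

    # Stage 1: standard cross-entropy minimization for fast convergence.
    for _ in range(warmup_epochs):
        opt.zero_grad()
        ce_loss(x).backward()
        opt.step()

    # Stage 2: alternate weight-space and feature-space perturbations.
    for epoch in range(adv_epochs):
        if epoch % 2 == 0:
            # Weight perturbation (SAM-style): ascend to a nearby
            # worst-case point in weight space, take the gradient
            # there, then restore the original weights and descend.
            loss = ce_loss(x)
            grads = torch.autograd.grad(loss, list(model.parameters()))
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
            eps = [rho_w * g / norm for g in grads]
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    p.add_(e)                 # ascend in weight space
            opt.zero_grad()
            ce_loss(x).backward()             # gradient at perturbed weights
            with torch.no_grad():
                for p, e in zip(model.parameters(), eps):
                    p.sub_(e)                 # restore original weights
            opt.step()                        # descend with that gradient
        else:
            # Feature perturbation: a single FGSM-like ascent step on
            # the node features, then descend on the perturbed graph.
            x_adv = x.detach().clone().requires_grad_(True)
            grad_x = torch.autograd.grad(ce_loss(x_adv), x_adv)[0]
            x_adv = (x + rho_f * grad_x.sign()).detach()
            opt.zero_grad()
            ce_loss(x_adv).backward()
            opt.step()
    return model
```

The even/odd alternation mirrors the abstract's "smoothing the weight and feature loss landscapes alternately"; in practice the two radii and the stage lengths would be tuned per dataset.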
Pages: 11
Related papers
50 records in total
  • [21] A two-stage design for multiple testing in large-scale association studies
    Wen, Shu-Hui
    Tzeng, Jung-Ying
    Kao, Jau-Tsuen
    Hsiao, Chuhsing Kate
    JOURNAL OF HUMAN GENETICS, 2006, 51 (06) : 523 - 532
  • [22] A two-stage heuristic method for the planning of medium voltage distribution networks with large-scale distributed generation
    Tao, Xiaohu
    Haubrich, Hans-Jürgen
    2006 INTERNATIONAL CONFERENCE ON PROBABILISTIC METHODS APPLIED TO POWER SYSTEMS, VOLS 1 AND 2, 2006, : 793 - 798
  • [23] Solution of a large-scale two-stage decision and scheduling problem using decomposition
    Al-Khayyal, F
    Griffin, PM
    Smith, NR
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2001, 132 (02) : 453 - 465
  • [24] Two-stage analysis of large-scale protocol information in mobile storage systems
    Jeong, Junyong
    Song, Yong Ho
    PROCEEDINGS OF 2016 5TH IEEE INTERNATIONAL CONFERENCE ON NETWORK INFRASTRUCTURE AND DIGITAL CONTENT (IEEE IC-NIDC 2016), 2016, : 224 - 228
  • [25] Two-stage Discriminative Re-ranking for Large-scale Landmark Retrieval
    Yokoo, Shuhei
    Ozaki, Kohei
    Simo-Serra, Edgar
    Iizuka, Satoshi
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 4363 - 4370
  • [26] Two-Stage Attention Model to Solve Large-Scale Traveling Salesman Problems
    He, Qi
    Wang, Feng
    Song, Jingge
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 119 - 130
  • [27] Cooperative Coevolution with Two-Stage Decomposition for Large-Scale Global Optimization Problems
    Yue, H. D.
    Sun, Y.
    DISCRETE DYNAMICS IN NATURE AND SOCIETY, 2021, 2021
  • [28] Two-stage based ensemble optimization framework for large-scale global optimization
    Wang, Yu
    Huang, Jin
    Dong, Wei Shan
    Yan, Jun Chi
    Tian, Chun Hua
    Li, Min
    Mo, Wen Ting
    EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 2013, 228 (02) : 308 - 320
  • [29] A two-stage evolutionary algorithm for large-scale sparse multiobjective optimization problems
    Jiang, Jing
    Han, Fei
    Wang, Jie
    Ling, Qinghua
    Han, Henry
    Wang, Yue
    SWARM AND EVOLUTIONARY COMPUTATION, 2022, 72
  • [30] Two-Stage Robust and Sparse Distributed Statistical Inference for Large-Scale Data
    Mozafari-Majd, Emadaldin
    Koivunen, Visa
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2022, 70 : 5351 - 5365