A two-stage co-adversarial perturbation to mitigate out-of-distribution generalization of large-scale graph

Cited by: 0
Authors
Wang, Yili [1 ]
Xue, Haotian [1 ]
Wang, Xin [1 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Graph neural network; Adversarial training; Graph out-of-distribution; Network
DOI
10.1016/j.eswa.2024.124472
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In the realm of graph out-of-distribution (OOD) learning, despite recent strides in advancing graph neural networks (GNNs) for modeling graph data, training GNNs on large-scale datasets remains a formidable hurdle due to the pervasive challenge of overfitting. To address this issue, researchers have explored adversarial training, a technique that enriches training data with worst-case adversarial examples. However, while prior work on adversarial training primarily focuses on safeguarding GNNs against malicious attacks, its potential to enhance the OOD generalization of GNNs in graph analytics remains less explored. In our research, we examine the weight and feature loss landscapes of GNNs, which illustrate how the loss function changes with respect to model weights and node features, respectively. Our investigation reveals a noteworthy phenomenon: GNNs tend to become trapped in sharp local minima within these loss landscapes, resulting in suboptimal OOD generalization. To address this challenge, we introduce co-adversarial perturbation (CAP) optimization, which considers both model weights and node features, and we design an alternating adversarial perturbation algorithm for graph OOD generalization. The algorithm operates iteratively, alternately smoothing the weight and feature loss landscapes. Moreover, our training process unfolds in two stages. The first stage centers on standard cross-entropy minimization, ensuring rapid convergence of GNN models. In the second stage, we employ our alternating adversarial training strategy to prevent the models from becoming ensnared in locally sharp minima. Extensive experiments provide compelling evidence that CAP generally enhances the OOD generalization performance of GNNs across a diverse range of large-scale graphs.
Pages: 11