Graph Invariant Learning with Subgraph Co-mixup for Out-of-Distribution Generalization

Times Cited: 0
Authors
Jia, Tianrui [1 ]
Li, Haoyang [2 ]
Yang, Cheng [1 ]
Tao, Tao [3 ]
Shi, Chuan [1 ]
Institutions
[1] Beijing Univ Posts & Telecommun, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
[3] China Mobile Informat Technol Co Ltd, Beijing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph neural networks (GNNs) have been shown to perform well in graph representation learning, but they often lack generalization capability when tackling out-of-distribution (OOD) data. Graph invariant learning methods, backed by the invariance principle across multiple predefined environments, have shown effectiveness in dealing with this issue. However, existing methods rely heavily on well-predefined or accurately generated environment partitions, which are hard to obtain in practice, leading to sub-optimal OOD generalization performance. In this paper, we propose a novel graph invariant learning method based on a co-mixup strategy over invariant and variant patterns, which jointly generates mixed multiple environments and captures invariant patterns from the mixed graph data. Specifically, we first adopt a subgraph extractor to identify invariant subgraphs. We then design a novel co-mixup strategy, i.e., jointly conducting environment mixup and invariant mixup. For the environment mixup, we mix the variant, environment-related subgraphs to generate sufficiently diverse environments, which is important for guaranteeing the quality of graph invariant learning. For the invariant mixup, we mix the invariant subgraphs, further encouraging the model to capture invariant patterns behind graphs while discarding spurious correlations, thereby improving OOD generalization. We show that the proposed environment mixup and invariant mixup mutually promote each other. Extensive experiments on both synthetic and real-world datasets demonstrate that our method significantly outperforms state-of-the-art baselines under various distribution shifts.
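The sketch below is a minimal, feature-level illustration of the co-mixup idea described in the abstract, not the authors' implementation (which operates on subgraph structures via a learned extractor). It assumes two graphs with equal node counts, dense node representations, and hypothetical invariance scores from an extractor; the names `subgraph_masks` and `co_mixup` are illustrative.

```python
import torch


def subgraph_masks(node_scores, ratio=0.5):
    # Split nodes into an "invariant" set (top-scoring nodes) and a
    # "variant" set (the rest), based on scores from a hypothetical
    # subgraph extractor.
    k = max(1, int(ratio * node_scores.numel()))
    idx = torch.argsort(node_scores, descending=True)
    inv_mask = torch.zeros_like(node_scores, dtype=torch.bool)
    inv_mask[idx[:k]] = True
    return inv_mask, ~inv_mask


def co_mixup(h_a, h_b, scores_a, scores_b, lam=0.5):
    """Feature-level co-mixup of two graphs with equal node counts.

    h_a, h_b: [N, d] node representations of graphs A and B.
    scores_a, scores_b: [N] invariance scores from the extractor.
    Returns (environment-mixed, invariant-mixed) node representations.
    """
    inv_a, var_a = subgraph_masks(scores_a)

    # Environment mixup: keep A's invariant part, interpolate the variant
    # (environment-related) parts of A and B to synthesize a new environment.
    env_mix = h_a.clone()
    env_mix[var_a] = lam * h_a[var_a] + (1 - lam) * h_b[var_a]

    # Invariant mixup: interpolate the invariant parts of A and B so the
    # encoder is encouraged to rely on stable, label-relevant patterns.
    inv_mix = h_a.clone()
    inv_mix[inv_a] = lam * h_a[inv_a] + (1 - lam) * h_b[inv_a]
    return env_mix, inv_mix
```

In a full pipeline, the environment-mixed representations would be grouped into diverse synthetic environments for an invariance-based objective, while the invariant-mixed representations would feed the classification loss.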
Pages: 8562 - 8570
Number of pages: 9
Related Papers
50 items in total
  • [31] Out-of-Distribution Generalization in Kernel Regression
    Canatar, Abdulkadir
    Bordelon, Blake
    Pehlevan, Cengiz
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [32] Causal softmax for out-of-distribution generalization
    Luo, Jing
    Zhao, Wanqing
    Peng, Jinye
    DIGITAL SIGNAL PROCESSING, 2025, 156
  • [33] GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering
    Jiang, Jingjing
    Liu, Ziyi
    Liu, Yifan
    Nan, Zhixiong
    Zheng, Nanning
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 199 - 208
  • [34] Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources
    Zheng, Haotian
    Wang, Qizhou
    Fang, Zhen
    Xia, Xiaobo
    Liu, Feng
    Liu, Tongliang
    Han, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [35] Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
    Milbich, Timo
    Roth, Karsten
    Sinha, Samarth
    Schmidt, Ludwig
    Ghassemi, Marzyeh
    Ommer, Bjoern
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [36] CGLearn: Consistent Gradient-Based Learning for Out-of-Distribution Generalization
    Chowdhury, Jawad
    Terejanu, Gabriel
    arXiv,
  • [37] Enhancing Out-of-distribution Generalization on Graphs via Causal Attention Learning
    Sui, Yongduo
    Mao, Wenyu
    Wang, Shuyao
    Wang, Xiang
    Wu, Jiancan
    He, Xiangnan
    Chua, Tat-Seng
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (05)
  • [38] GOOD: A Graph Out-of-Distribution Benchmark
    Gui, Shurui
    Li, Xiner
    Wang, Limei
    Ji, Shuiwang
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [39] Winning Prize Comes from Losing Tickets: Improve Invariant Learning by Exploring Variant Parameters for Out-of-Distribution Generalization
    Huang, Zhuo
    Li, Muyang
    Shen, Li
    Yu, Jun
    Gong, Chen
    Han, Bo
    Liu, Tongliang
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2025, 133 (01) : 456 - 474
  • [40] On the Adversarial Robustness of Out-of-distribution Generalization Models
    Zou, Xin
    Liu, Weiwei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,