Self-supervised hypergraph structure learning

Citations: 0
Authors
Li, Mingyuan [1 ,2 ,3 ,4 ]
Yang, Yanlin [1 ,2 ,3 ,4 ]
Meng, Lei [5 ]
Peng, Lu [1 ,2 ,3 ,4 ]
Zhao, Haixing [2 ,3 ,4 ,5 ]
Ye, Zhonglin [1 ,2 ,3 ,4 ]
Affiliations
[1] Qinghai Normal Univ, Coll Comp, Xining 810001, Qinghai, Peoples R China
[2] State Key Lab Tibetan Intelligent Informat Proc &, Xining 810001, Qinghai, Peoples R China
[3] Minist Educ, Key Lab Tibetan Informat Proc, Xining 810001, Qinghai, Peoples R China
[4] Qinghai Tibetan Informat Res Ctr, Xining 810001, Qinghai, Peoples R China
[5] Qinghai Minzu Univ, Sch Intelligent Sci & Engn, Xining 810007, Qinghai, Peoples R China
Keywords
Self-supervised; Hypergraph structure learning; Optimizing hypergraph structure; Hypergraph neural networks; Joint optimization;
DOI
10.1007/s10462-025-11199-6
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traditional Hypergraph Neural Networks (HGNNs) often assume that hypergraph structures are perfectly constructed, yet real-world hypergraphs are typically corrupted by noise, missing data, or irrelevant information, limiting the effectiveness of hypergraph learning. To address this challenge, we propose SHSL, a novel Self-supervised Hypergraph Structure Learning framework that jointly explores and optimizes hypergraph structures without external labels. SHSL consists of two key components: a self-organizing initialization module that constructs latent hypergraph representations, and a differentiable optimization module that refines hypergraphs through gradient-based learning. These modules collaboratively capture high-order dependencies to enhance hypergraph representations. Furthermore, SHSL introduces a dual learning mechanism to simultaneously guide structure exploration and optimization within a unified framework. Experiments on six public datasets demonstrate that SHSL outperforms state-of-the-art baselines, achieving Accuracy improvements of 1.36%-32.37% and 2.23%-27.54% on hypergraph exploration and optimization tasks, and 1.19%-8.4% on non-hypergraph datasets. Robustness evaluations further validate SHSL's effectiveness under noisy and incomplete scenarios, highlighting its practical applicability. The implementation of SHSL and all experimental codes are publicly available at: https://github.com/MingyuanLi88888/SHSL.
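The abstract describes a two-stage pipeline: a self-organizing module that builds an initial latent hypergraph, and a differentiable module that refines it by gradient descent. The paper's actual formulation is not given in this record, so the following is only a minimal NumPy sketch under stated assumptions: the initialization is approximated by k-nearest-neighbour seeded hyperedges, and the refinement by gradient descent on relaxed (sigmoid) incidence logits against a centroid-distance loss that prunes implausible memberships. All function names and the loss are hypothetical simplifications, not the SHSL method itself.

```python
import numpy as np

def init_incidence(X, k=1):
    """Illustrative 'self-organizing' initialization (assumed, not SHSL's):
    each node seeds one hyperedge containing itself and its k nearest
    neighbours in feature space. Column e of H is hyperedge e."""
    n = X.shape[0]
    dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))
    for i in range(n):
        H[np.argsort(dist[i])[:k + 1], i] = 1.0
    return H

def refine(X, H, lr=0.5, steps=100):
    """Illustrative differentiable refinement (assumed, not SHSL's):
    relax the binary incidence matrix to S = sigmoid(Z) and descend the
    loss sum_{i,e} S[i,e] * ||x_i - c_e||^2, where c_e is the (detached)
    soft centroid of hyperedge e. The gradient only shrinks memberships,
    so the step acts as a pruner of noisy node-hyperedge incidences."""
    Z = 4.0 * H - 2.0  # logits: members start near 0.88, others near 0.12
    for _ in range(steps):
        S = 1.0 / (1.0 + np.exp(-Z))
        C = (S.T @ X) / (S.sum(0)[:, None] + 1e-8)       # soft centroids
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        Z -= lr * S * (1.0 - S) * d   # grad of sum(S * d) with C held fixed
    return 1.0 / (1.0 + np.exp(-Z))

# Two tight clusters; corrupt hyperedge 0 with a far-away node, then refine.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
H = init_incidence(X, k=1)
H[2, 0] = 1.0                 # noise: node 2 wrongly joins hyperedge 0
S = refine(X, H)
print(S[2, 0] < S[0, 0])      # the noisy membership is suppressed
```

The sketch mirrors only the joint structure of the framework (explore, then optimize differentiably); the paper additionally couples both stages through a self-supervised dual learning objective, which is omitted here.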
Pages: 30