GTAT: empowering graph neural networks with cross attention

Cited: 0
Authors
Shen, Jiahao [1 ]
Ain, Qura Tul [1 ]
Liu, Yaohua [1 ]
Liang, Banqing [1 ]
Qiang, Xiaoli [2 ]
Kou, Zheng [1 ]
Affiliations
[1] Guangzhou Univ, Inst Comp Sci & Technol, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
Source
SCIENTIFIC REPORTS | 2025 / Vol. 15 / Iss. 01
Funding
National Natural Science Foundation of China;
Keywords
Graph learning; Graph neural networks; Network topology; Cross attention mechanism;
D O I
10.1038/s41598-025-88993-3
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Graph Neural Networks (GNNs) serve as a powerful framework for representation learning on graph-structured data, capturing node information by recursively aggregating and transforming the representations of neighboring nodes. Graph topology plays an important role in learning graph representations and impacts the performance of GNNs. However, current methods fail to adequately integrate topological information into graph representation learning. To better leverage topological information and enhance representation capabilities, we propose the Graph Topology Attention Networks (GTAT). Specifically, GTAT first extracts topology features from the graph's structure and encodes them into topology representations. The node and topology representations are then fed into cross attention GNN layers for interaction. This integration allows the model to dynamically adjust the influence of node features and topological information, thus improving the expressiveness of nodes. Experimental results on various graph benchmark datasets demonstrate that GTAT outperforms recent state-of-the-art methods. Further analysis reveals GTAT's capability to mitigate the over-smoothing issue and its increased robustness against noisy data.
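The abstract describes a two-stage pipeline: encode structural features into topology representations, then let node and topology representations interact through cross attention. The record does not specify the paper's actual encoder or layer equations, so the following is only an illustrative NumPy sketch under assumed choices (degree-based topology features, single-head scaled dot-product cross attention with random projection weights); the names `topology_features` and `cross_attention` are hypothetical.

```python
import numpy as np

def topology_features(adj):
    """Toy topology encoding: per-node degree and 2-hop reachability count.
    (The paper's actual topology encoder is not detailed in this record.)"""
    deg = adj.sum(axis=1, keepdims=True)                    # node degree
    two_hop = np.clip(adj @ adj, 0, 1).sum(axis=1, keepdims=True)
    return np.hstack([deg, two_hop])                        # shape (N, 2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(node_repr, topo_repr, d_k=8, seed=0):
    """Node representations query the topology representations:
    queries come from node features, keys/values from topology features."""
    rng = np.random.default_rng(seed)
    W_q = rng.normal(size=(node_repr.shape[1], d_k))  # node -> query
    W_k = rng.normal(size=(topo_repr.shape[1], d_k))  # topology -> key
    W_v = rng.normal(size=(topo_repr.shape[1], d_k))  # topology -> value
    Q, K, V = node_repr @ W_q, topo_repr @ W_k, topo_repr @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N, N) cross-attention weights
    return attn @ V                         # topology-aware node features

# toy 5-node ring graph with random 4-dimensional node features
adj = np.zeros((5, 5))
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
nodes = np.random.default_rng(1).normal(size=(5, 4))
topo = topology_features(adj)
out = cross_attention(nodes, topo)
print(out.shape)  # (5, 8)
```

Because the attention weights depend on both inputs, the mix of structural and feature information is recomputed per node, which is the "dynamic adjustment" the abstract refers to; the actual GTAT layers would learn the projection matrices rather than sample them randomly.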
Pages: 13
Related papers
(50 items total)
  • [21] Demystifying Oversmoothing in Attention-Based Graph Neural Networks
    Wu, Xinyi
    Ajorlou, Amir
    Wu, Zihui
    Jadbabaie, Ali
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [22] Empowering Simple Graph Convolutional Networks
    Pasa, Luca
    Navarin, Nicolo
    Erb, Wolfgang
    Sperduti, Alessandro
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 35 (04) : 4367 - 4372
  • [23] Graph Attention Site Prediction (GrASP): Identifying Druggable Binding Sites Using Graph Neural Networks with Attention
    Smith, Zachary
    Strobel, Michael
    Vani, Bodhi P.
    Tiwary, Pratyush
    JOURNAL OF CHEMICAL INFORMATION AND MODELING, 2024, 64 (07) : 2637 - 2644
  • [24] Empowering Digital Twin for Future Networks with Graph Neural Networks: Overview, Enabling Technologies, Challenges, and Opportunities
    Ngo, Duc-Thinh
    Aouedi, Ons
    Piamrat, Kandaraj
    Hassan, Thomas
    Raipin-Parvedy, Philippe
    Mourtzis, Dimitris
    FUTURE INTERNET, 2023, 15 (12)
  • [25] On cross-attention-based graph neural networks for fault diagnosis using multi-sensor measurement
    Ren, Zhenxing
    Zhou, Yu
    STRUCTURAL HEALTH MONITORING-AN INTERNATIONAL JOURNAL, 2025,
  • [26] Graph Neural Network-Based Speech Emotion Recognition: A Fusion of Skip Graph Convolutional Networks and Graph Attention Networks
    Wang, Han
    Kim, Deok-Hwan
    ELECTRONICS, 2024, 13 (21)
  • [27] Attention recurrent cross-graph neural network for selecting premises
    Liu, Qinghua
    Xu, Yang
    He, Xingxing
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2022, 13 (05) : 1301 - 1315
  • [29] Graph Neural Networks With Triple Attention for Few-Shot Learning
    Cheng, Hao
    Zhou, Joey Tianyi
    Tay, Wee Peng
    Wen, Bihan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 8225 - 8239
  • [30] SAlign: A Graph Neural Attention Framework for Aligning Structurally Heterogeneous Networks
    Saxena, S.
    Chandra, J.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2023, 77 : 949 - 969