GTAT: empowering graph neural networks with cross attention

Cited by: 0
Authors
Shen, Jiahao [1 ]
Ain, Qura Tul [1 ]
Liu, Yaohua [1 ]
Liang, Banqing [1 ]
Qiang, Xiaoli [2 ]
Kou, Zheng [1 ]
Affiliations
[1] Guangzhou Univ, Inst Comp Sci & Technol, Guangzhou 510006, Peoples R China
[2] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
Source
SCIENTIFIC REPORTS | 2025, Vol. 15, Issue 1
Funding
National Natural Science Foundation of China;
Keywords
Graph learning; Graph neural networks; Network topology; Cross attention mechanism;
DOI
10.1038/s41598-025-88993-3
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [Natural Sciences, General];
Discipline Classification Codes
07; 0710; 09;
Abstract
Graph Neural Networks (GNNs) serve as a powerful framework for representation learning on graph-structured data, capturing node information by recursively aggregating and transforming the representations of neighboring nodes. Graph topology plays an important role in learning graph representations and impacts the performance of GNNs. However, current methods fail to adequately integrate topological information into graph representation learning. To better leverage topological information and enhance representation capabilities, we propose the Graph Topology Attention Networks (GTAT). Specifically, GTAT first extracts topology features from the graph's structure and encodes them into topology representations. The node and topology representations are then fed into cross-attention GNN layers for interaction. This integration allows the model to dynamically adjust the influence of node features and topological information, thus improving the expressiveness of nodes. Experimental results on various graph benchmark datasets demonstrate that GTAT outperforms recent state-of-the-art methods. Further analysis reveals GTAT's capability to mitigate the over-smoothing issue and its improved robustness against noisy data.
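The record does not reproduce GTAT's actual layer equations. As a minimal, hypothetical sketch of the cross-attention idea the abstract describes (queries drawn from the node stream, keys and values from the topology stream), assuming plain scaled dot-product attention and randomly initialized projection matrices:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(node_repr, topo_repr, wq, wk, wv):
    """Let node representations attend over topology representations.

    node_repr: (n, d) node features; topo_repr: (n, d) topology encodings.
    Returns an (n, d) topology-aware update for each node.
    """
    q = node_repr @ wq                 # queries from the node stream
    k = topo_repr @ wk                 # keys from the topology stream
    v = topo_repr @ wv                 # values from the topology stream
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (n, n) attention weights
    return scores @ v

rng = np.random.default_rng(0)
n, d = 5, 8
x = rng.normal(size=(n, d))            # toy node representations
t = rng.normal(size=(n, d))            # toy topology representations
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

out = cross_attention(x, t, wq, wk, wv)
print(out.shape)  # (5, 8)
```

In the paper's framing, the attention weights are what let the model "dynamically adjust the influence" of topological information per node; the symmetric direction (topology attending over nodes) and the GNN aggregation around these layers are not shown here.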
Pages: 13
Related Papers
Total: 50
  • [1] SEA: Graph Shell Attention in Graph Neural Networks
    Frey, Christian M. M.
    Ma, Yunpu
    Schubert, Matthias
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2022, PT II, 2023, 13714 : 326 - 343
  • [2] Graph Attention Networks for Neural Social Recommendation
    Mu, Nan
    Zha, Daren
    He, Yuanye
    Tang, Zhihao
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1320 - 1327
  • [3] Understanding Attention and Generalization in Graph Neural Networks
    Knyazev, Boris
    Taylor, Graham W.
    Amer, Mohamed R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [4] Revisiting Attention-Based Graph Neural Networks for Graph Classification
    Tao, Ye
    Li, Ying
    Wu, Zhonghai
    PARALLEL PROBLEM SOLVING FROM NATURE - PPSN XVII, PPSN 2022, PT I, 2022, 13398 : 442 - 458
  • [5] Attention-based graph neural networks: a survey
    Sun, Chengcheng
    Li, Chenhao
    Lin, Xiang
    Zheng, Tianji
    Meng, Fanrong
    Rui, Xiaobin
    Wang, Zhixiao
    Artificial Intelligence Review, 2023, 56 : 2263 - 2310
  • [6] Integrated Convolutional and Graph Attention Neural Networks for Electroencephalography
    Kang, Jae-eon
    Lee, Changha
    Lee, Jong-Hwan
    2024 12TH INTERNATIONAL WINTER CONFERENCE ON BRAIN-COMPUTER INTERFACE, BCI 2024, 2024,
  • [7] GRAPH ATTENTION NEURAL NETWORKS FOR POINT CLOUD RECOGNITION
    Li, Zongmin
    Zhang, Jun
    Li, Guanlin
    Liu, Yujie
    Li, Siyuan
    2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2019, : 387 - 392
  • [8] Bi-Level Attention Graph Neural Networks
    Iyer, Roshni G.
    Wang, Wei
    Sun, Yizhou
    2021 21ST IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM 2021), 2021, : 1126 - 1131
  • [9] Supervised Attention Using Homophily in Graph Neural Networks
    Chatzianastasis, Michail
    Nikolentzos, Giannis
    Vazirgiannis, Michalis
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT IV, 2023, 14257 : 576 - 586
  • [10] Multi-hop Attention Graph Neural Networks
    Wang, Guangtao
    Ying, Rex
    Huang, Jing
    Leskovec, Jure
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 3089 - 3096