Simple and deep graph attention networks

Cited: 1
Authors
Su, Guangxin [1 ]
Wang, Hanchen [2 ]
Zhang, Ying [3 ]
Zhang, Wenjie [1 ]
Lin, Xuemin [4 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW 2052, Australia
[2] Univ Technol Sydney, Ultimo, NSW 2007, Australia
[3] Zhejiang Gongshang Univ, Hangzhou 314423, Zhejiang, Peoples R China
[4] Shanghai Jiao Tong Univ, Shanghai 200240, Peoples R China
Keywords
Deep graph neural networks; Oversmoothing; Graph attention networks; Attention mechanism;
DOI
10.1016/j.knosys.2024.111649
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Graph Attention Networks (GATs) and Graph Convolutional Neural Networks (GCNs) are two state-of-the-art architectures in Graph Neural Networks (GNNs). It is well known that both models suffer from performance degradation when more GNN layers are stacked, and many works have been devoted to addressing this problem. We notice that the main research efforts in this line focus on GCN models, and their techniques cannot be readily applied to GAT models due to the inherent differences between the two architectures. In GAT, the attention mechanism is limited because it ignores the overwhelming propagation from certain nodes as the number of layers increases. To fully utilize the expressive power of GAT, we propose a new version of GAT named Layer-wise Self-adaptive GAT (LSGAT), which can effectively alleviate the oversmoothing issue in deep GAT and is strictly more expressive than GAT. We redesign the attention coefficient computation mechanism so that it is adaptively adjusted by layer depth and considers both immediate neighbors and non-adjacent nodes from a global view. The experimental evaluation confirms that LSGAT consistently achieves better results than relevant counterparts on node classification tasks.
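To make the abstract's terminology concrete, the sketch below shows how attention coefficients are computed in a standard single-head GAT layer (Veličković et al., 2018), which is the mechanism LSGAT modifies. It is a minimal illustration under stated assumptions, not the paper's method: the abstract does not give LSGAT's formulation, so the class name SimpleGATLayer and the depth_scale argument are hypothetical placeholders for a layer-depth-dependent adjustment.

```python
# Minimal single-head GAT attention sketch (standard GAT, Velickovic et al., 2018).
# NOTE: this is NOT the LSGAT mechanism from the paper; the abstract gives no formula
# for the layer-wise self-adaptive coefficients, so `depth_scale` below is only a
# hypothetical placeholder for a depth-dependent adjustment of the attention logits.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGATLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)        # shared linear transform
        self.a_src = nn.Parameter(torch.randn(out_dim) * 0.1)  # attention vector, source part
        self.a_dst = nn.Parameter(torch.randn(out_dim) * 0.1)  # attention vector, target part

    def forward(self, h: torch.Tensor, adj: torch.Tensor, depth_scale: float = 1.0) -> torch.Tensor:
        # h: (N, in_dim) node features; adj: (N, N) 0/1 adjacency matrix with self-loops.
        z = self.W(h)                                           # (N, out_dim)
        logits = F.leaky_relu(
            (z @ self.a_src).unsqueeze(1) + (z @ self.a_dst).unsqueeze(0), 0.2
        )                                                       # logits[i, j] = e_ij
        # Hypothetical hook: rescale logits by a layer-depth-dependent factor.
        logits = depth_scale * logits
        logits = logits.masked_fill(adj == 0, float("-inf"))   # attend only over neighbors
        alpha = torch.softmax(logits, dim=1)                   # attention coefficients alpha_ij
        return alpha @ z                                       # aggregated node representations


if __name__ == "__main__":
    # Toy 4-node path graph 0-1-2-3 with self-loops.
    adj = torch.eye(4)
    for i, j in [(0, 1), (1, 2), (2, 3)]:
        adj[i, j] = adj[j, i] = 1.0
    x = torch.randn(4, 8)
    layer = SimpleGATLayer(8, 16)
    out = layer(x, adj, depth_scale=0.5)  # smaller scale -> flatter attention distribution
    print(out.shape)                      # torch.Size([4, 16])
```

Note that this sketch attends only over immediate neighbors; per the abstract, LSGAT additionally takes non-adjacent nodes into account from a global view, which is not reproduced here.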
Pages: 9
Related papers
50 records in total
  • [1] Simple and Deep Graph Convolutional Networks
    Chen, Ming
    Wei, Zhewei
    Huang, Zengfeng
    Ding, Bolin
    Li, Yaliang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [2] Simple and Deep Graph Convolutional Networks
    Chen, Ming
    Wei, Zhewei
    Huang, Zengfeng
    Ding, Bolin
    Li, Yaliang
    25TH AMERICAS CONFERENCE ON INFORMATION SYSTEMS (AMCIS 2019), 2019,
  • [3] Transferable graph neural networks with deep alignment attention
    Xie, Ying
    Xu, Rongbin
    Yang, Yun
    INFORMATION SCIENCES, 2023, 643
  • [4] Deep Attention Diffusion Graph Neural Networks for Text Classification
    Liu, Yonghao
    Guan, Renchu
    Giunchiglia, Fausto
    Liang, Yanchun
    Feng, Xiaoyue
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 8142 - 8152
  • [5] Graph-context Attention Networks for Size-varied Deep Graph Matching
    Jiang, Zheheng
    Rahmani, Hossein
    Angelov, Plamen
    Black, Sue
    Williams, Bryan M.
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 2333 - 2342
  • [6] Deep relational self-Attention networks for scene graph generation
    Li, Ping
    Yu, Zhou
    Zhan, Yibing
    PATTERN RECOGNITION LETTERS, 2022, 153 : 200 - 206
  • [7] Deep multi-graph neural networks with attention fusion for recommendation
    Song, Yuzhi
    Ye, Hailiang
    Li, Ming
    Cao, Feilong
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 191
  • [8] A Regularized Attention Mechanism for Graph Attention Networks
    Shanthamallu, Uday Shankar
    Thiagarajan, Jayaraman J.
    Spanias, Andreas
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 3372 - 3376
  • [9] Graph Ordering Attention Networks
    Chatzianastasis, Michail
    Lutzeyer, Johannes
    Dasoulas, George
    Vazirgiannis, Michalis
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023, : 7006 - 7014