Simple and deep graph attention networks

Cited: 1
Authors
Su, Guangxin [1 ]
Wang, Hanchen [2 ]
Zhang, Ying [3 ]
Zhang, Wenjie [1 ]
Lin, Xuemin [4 ]
Affiliations
[1] Univ New South Wales, Sydney, NSW 2052, Australia
[2] Univ Technol Sydney, Ultimo, NSW 2007, Australia
[3] Zhejiang Gongshang Univ, Hangzhou 314423, Zhejiang, Peoples R China
[4] Shanghai Jiao Tong Univ, Shanghai 200240, Peoples R China
Keywords
Deep graph neural networks; Oversmoothing; Graph attention networks; Attention mechanism;
DOI
10.1016/j.knosys.2024.111649
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Graph Attention Networks (GATs) and Graph Convolutional Networks (GCNs) are two state-of-the-art architectures among Graph Neural Networks (GNNs). Both models are known to suffer performance degradation as more GNN layers are stacked, and many works have been devoted to addressing this problem. We observe that the main research efforts in this line focus on GCN models, and their techniques do not transfer well to GAT models because of the inherent differences between the two architectures. In GAT, the attention mechanism is limited in that it ignores the overwhelming propagation from certain nodes as the number of layers increases. To fully exploit the expressive power of GAT, we propose a new variant named Layer-wise Self-adaptive GAT (LSGAT), which effectively alleviates the oversmoothing issue in deep GATs and is strictly more expressive than GAT. We redesign the computation of attention coefficients so that it is adaptively adjusted by layer depth, considering both immediate neighbors and non-adjacent nodes from a global view. The experimental evaluation confirms that LSGAT consistently achieves better results on node classification tasks than relevant counterparts.
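To make the idea of depth-adaptive attention concrete, below is a minimal PyTorch sketch of a GAT-style layer whose attention distribution is tempered by layer depth and mixed with a crude global context term. The depth schedule `tau`, the global mean term weighted by `gamma`, and the class name `LayerwiseAdaptiveAttention` are illustrative assumptions for exposition only, not the paper's actual LSGAT formulation.

```python
# Hypothetical sketch of layer-wise adaptive graph attention (PyTorch).
# The depth-dependent temperature `tau` and the global context term are
# illustrative assumptions, NOT the exact LSGAT mechanism from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseAdaptiveAttention(nn.Module):
    """Single-head GAT-style layer whose attention sharpness depends on depth."""

    def __init__(self, in_dim: int, out_dim: int, layer_idx: int, num_layers: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Parameter(torch.empty(out_dim))
        self.a_dst = nn.Parameter(torch.empty(out_dim))
        nn.init.xavier_uniform_(self.W.weight)
        nn.init.normal_(self.a_src, std=0.1)
        nn.init.normal_(self.a_dst, std=0.1)
        # Assumed depth schedule: deeper layers get a flatter softmax so that
        # no single node's propagation overwhelms the rest as depth grows.
        self.tau = 1.0 + layer_idx / max(num_layers - 1, 1)
        # Assumed global-view mixing weight for non-adjacent nodes.
        self.gamma = nn.Parameter(torch.tensor(0.1))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) dense 0/1 adjacency
        # (including self-loops, so every softmax row has a finite entry).
        h = self.W(x)                                      # (N, out_dim)
        scores = (h @ self.a_src).unsqueeze(1) + (h @ self.a_dst).unsqueeze(0)
        scores = F.leaky_relu(scores, negative_slope=0.2)
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores / self.tau, dim=1)    # depth-tempered attention
        global_ctx = h.mean(dim=0, keepdim=True)           # crude view of all nodes
        return F.elu(alpha @ h + self.gamma * global_ctx)


if __name__ == "__main__":
    # Toy usage: 5 nodes on a ring graph, a 4-layer stack.
    N, d = 5, 8
    adj = torch.eye(N)
    for i in range(N):
        adj[i, (i + 1) % N] = adj[i, (i - 1) % N] = 1.0
    layers = [LayerwiseAdaptiveAttention(d, d, l, 4) for l in range(4)]
    x = torch.randn(N, d)
    for layer in layers:
        x = layer(x)
    print(x.shape)  # torch.Size([5, 8])
```

Under this sketch, early layers attend sharply to immediate neighbors while deeper layers spread attention more evenly and lean more on the global term, which is one plausible way to counteract oversmoothing in deep attention stacks.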
Pages: 9
Related Papers (50 records)
  • [41] DAGCN: Dual Attention Graph Convolutional Networks
    Chen, Fengwen
    Pan, Shirui
    Jiang, Jing
    Huo, Huan
    Long, Guodong
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [42] Graph Attention Networks for Anti-Spoofing
    Tak, Hemlata
    Jung, Jee-weon
    Patino, Jose
    Todisco, Massimiliano
    Evans, Nicholas
    INTERSPEECH 2021, 2021, : 2356 - 2360
  • [43] Probabilistic Logic Graph Attention Networks for Reasoning
    Vardhan, L. Vivek Harsha
    Jia, Guo
    Kok, Stanley
    WWW'20: COMPANION PROCEEDINGS OF THE WEB CONFERENCE 2020, 2020, : 669 - 673
  • [44] Fairness-aware Graph Attention Networks
    Kose, O. Deniz
    Shen, Yanning
    2022 56TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2022, : 843 - 846
  • [45] Backpropagation Computation for Training Graph Attention Networks
    Gould, Joe
    Parhi, Keshab K.
    JOURNAL OF SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, 2024, 96 (01): 1 - 14
  • [46] Graph Attention Networks for Neural Social Recommendation
    Mu, Nan
    Zha, Daren
    He, Yuanye
    Tang, Zhihao
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1320 - 1327
  • [47] Understanding Attention and Generalization in Graph Neural Networks
    Knyazev, Boris
    Taylor, Graham W.
    Amer, Mohamed R.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [48] Heterogeneous graph attention networks for passage retrieval
    Albarede, Lucas
    Mulhem, Philippe
    Goeuriot, Lorraine
    Marie, Sylvain
    Le Pape-Gardeux, Claude
    Chardin-Segui, Trinidad
    INFORMATION RETRIEVAL JOURNAL, 2023, 26 (1-2)
  • [50] Entity Resolution with Hierarchical Graph Attention Networks
    Yao, Dezhong
    Gu, Yuhong
    Cong, Gao
    Jin, Hai
    Lv, Xinqiao
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MANAGEMENT OF DATA (SIGMOD '22), 2022, : 429 - 442