Sparse graphs-based dynamic attention networks

Cited by: 0
Authors
Chen, Runze [1 ]
Lin, Kaibiao [1 ]
Hong, Binsheng [1 ]
Zhang, Shandan [1 ]
Yang, Fan [2 ]
Affiliations
[1] Xiamen Univ Technol, Dept Comp Sci & Technol, Xiamen 361024, Peoples R China
[2] Xiamen Univ, Dept Automat, Xiamen 361005, Peoples R China
Keywords
Graph neural networks; Graph attention networks; Sparse graphs; Dynamic attention;
DOI
10.1016/j.heliyon.2024.e35938
CLC classification
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences];
Subject classification
07; 0710; 09;
Abstract
Previous research has generally assumed that Graph Neural Networks (GNNs) operate on graph structures that faithfully capture the relationships among nodes. In practice, however, real-world graph datasets are often noisy, and this noise can propagate through the network and degrade the outcome of downstream tasks. To cope with the complex structure of real-world graphs and the many potential sources of disturbance, we introduce Sparse Graph Dynamic Attention Networks (SDGAT) in this research. SDGAT applies L0 regularization to learn a sparse representation of the graph structure, pruning noise and yielding a more concise sparse graph. Building on this foundation, the model integrates a dynamic attention mechanism, allowing it to selectively focus on key nodes and edges, filter out irrelevant data, and aggregate features effectively from important neighbors. To evaluate the performance of SDGAT, we conducted experiments on three citation datasets and compared it against commonly used baseline models. The results show that SDGAT excels at node classification, notably reaching 85.29% accuracy on the Cora dataset, roughly a 3% improvement over the majority of baseline models. These findings provide evidence that SDGAT performs well on all three citation datasets, underscoring the efficacy of a dynamic attention network built upon a sparse graph.
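The record does not include code, so as a rough illustration of the two ideas the abstract combines, here is a minimal NumPy sketch: edges are first pruned by learned gates (standing in for the L0-regularized sparsification), then neighbors are weighted with a GATv2-style dynamic attention score a^T LeakyReLU(W [h_i ‖ h_j]). All names, shapes, and the gate threshold are illustrative assumptions, not SDGAT's actual implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_attention(h_i, neighbors, W, a):
    """Score each neighbor j of node i with a^T LeakyReLU(W [h_i || h_j]).

    Applying the nonlinearity *before* the scoring vector `a` is what makes
    the attention "dynamic" (GATv2-style): the neighbor ranking can change
    per query node instead of being globally fixed.
    """
    scores = np.array([a @ leaky_relu(W @ np.concatenate([h_i, h_j]))
                       for h_j in neighbors])
    alpha = softmax(scores)                      # normalized attention weights
    h_out = sum(w * h_j for w, h_j in zip(alpha, neighbors))
    return alpha, h_out

rng = np.random.default_rng(0)
d = 4                                            # toy feature dimension
h_i = rng.normal(size=d)
neighbors = [rng.normal(size=d) for _ in range(3)]

# Hypothetical learned edge gates standing in for L0 sparsification:
# edges whose gate is near zero are pruned before attention is computed.
gates = np.array([0.9, 0.0, 0.7])
kept = [h for h, g in zip(neighbors, gates) if g > 0.5]

W = rng.normal(size=(d, 2 * d))
a = rng.normal(size=d)
alpha, h_out = dynamic_attention(h_i, kept, W, a)
```

Because one gate is near zero, only two of the three neighbors contribute to the aggregation, mimicking how a sparsified graph keeps attention focused on informative edges.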
Pages: 13
Related papers (50 total)
  • [31] Sankar, Aravind; Wu, Yanhong; Gou, Liang; Zhang, Wei; Yang, Hao. DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks. PROCEEDINGS OF THE 13TH INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING (WSDM '20), 2020: 519-527
  • [32] Cao, Zhanyue; Luo, Chao. Link prediction for knowledge graphs based on extended relational graph attention networks. EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [33] Cavallari, Sandro; Poria, Soujanya; Cambria, Erik; Zheng, Vincent W.; Cai, Hongyun. An Attention-Based Model for Learning Dynamic Interaction Networks. 2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019
  • [34] Yi, Dan; Zhu, Junlei; Feng, Lixia; Wang, Jiaxin; Yang, Mengyini. Optimal r-dynamic coloring of sparse graphs. JOURNAL OF COMBINATORIAL OPTIMIZATION, 2019, 38 (02): 545-555
  • [36] Mou, Chenqi; Ju, Wenwen. Sparse Triangular Decomposition for Computing Equilibria of Biological Dynamic Systems Based on Chordal Graphs. IEEE-ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS, 2023, 20 (03): 1667-1678
  • [37] Wheatman, Brian; Burns, Randal. Streaming Sparse Graphs using Efficient Dynamic Sets. 2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021: 284-294
  • [38] Zhu, Junlei; Bu, Yuehua. List r-dynamic coloring of sparse graphs. THEORETICAL COMPUTER SCIENCE, 2020, 817: 1-11
  • [39] Cheng, Jiajun; Liu, Wenjie; Wang, Zhifan; Ren, Zhijie; Li, Xingwen. Joint event extraction model based on dynamic attention matching and graph attention networks. SCIENTIFIC REPORTS, 15 (1)
  • [40] Kneis, J.; Mölle, D.; Richter, S.; Rossmanith, P. Algorithms based on the treewidth of sparse graphs. GRAPH-THEORETIC CONCEPTS IN COMPUTER SCIENCE, 2005, 3787: 385-396