Integrating multi-modal information to detect spatial domains of spatial transcriptomics by graph attention network

Cited by: 0
Authors
Yuying Huo [1]
Yilang Guo [1]
Jiakang Wang [1]
Huijie Xue [1]
Yujuan Feng [2]
Weizheng Chen [3]
Xiangyu Li [1]
Affiliations
[1] School of Software Engineering, Beijing Jiaotong University
[2] School of Software Engineering, Beijing University of Technology
[3] Baidu
Keywords
DOI: Not available
CLC number: Q811.4 [Bioinformatics]
Subject classification codes: 0711; 0831
Abstract
Recent advances in spatially resolved transcriptomic technologies have enabled unprecedented opportunities to elucidate tissue architecture and function in situ. Spatial transcriptomics can provide multimodal and complementary information simultaneously, including gene expression profiles, spatial locations, and histology images. However, most existing methods have limitations in efficiently utilizing spatial information and matched high-resolution histology images. To fully leverage the multi-modal information, we propose a SPAtially embedded Deep Attentional graph Clustering (SpaDAC) method to identify spatial domains while reconstructing denoised gene expression profiles. This method can efficiently learn low-dimensional embeddings for spatial transcriptomics data by constructing multi-view graph modules that capture both spatial location connectivity and morphological connectivity. Benchmark results demonstrate that SpaDAC outperforms other algorithms on several recent spatial transcriptomics datasets. SpaDAC is a valuable tool for spatial domain detection, facilitating the comprehension of tissue architecture and the cellular microenvironment. The source code of SpaDAC is freely available at GitHub (https://github.com/huoyuying/SpaDAC.git).
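The abstract describes the method only at a high level: multi-view graphs are built from spot coordinates and histology-derived features, and gene expression is passed through graph attention layers to obtain low-dimensional embeddings. The minimal Python sketch below illustrates that general idea, not the authors' SpaDAC implementation: every name and setting here (build_spatial_graph, build_morphology_graph, DenseGATLayer, MultiViewEncoder, k=6 neighbours, equal averaging of the two views) is a hypothetical choice made for illustration under those assumptions.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph

def build_spatial_graph(coords, k=6):
    # Symmetric k-nearest-neighbour adjacency built from spot coordinates.
    A = kneighbors_graph(coords, n_neighbors=k, mode="connectivity").toarray()
    return np.maximum(A, A.T)

def build_morphology_graph(img_feats, k=6):
    # Connect each spot to the k spots with the most similar histology-patch features.
    f = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)
    nbrs = np.argsort(-sim, axis=1)[:, :k]
    A = np.zeros_like(sim)
    A[np.repeat(np.arange(sim.shape[0]), k), nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)

class DenseGATLayer(nn.Module):
    # Single-head graph attention layer over a dense adjacency matrix.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Linear(out_dim, 1, bias=False)
        self.a_dst = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.W(x)                                                # (N, out_dim)
        logits = F.leaky_relu(self.a_src(h) + self.a_dst(h).T, 0.2)  # (N, N) pairwise scores
        logits = logits.masked_fill(adj == 0, float("-inf"))         # attend only along graph edges
        alpha = torch.softmax(logits, dim=1)                         # attention weights per spot
        return F.elu(alpha @ h)

class MultiViewEncoder(nn.Module):
    # Averages embeddings learned from the spatial and morphological graph views.
    def __init__(self, in_dim, hid_dim=64, z_dim=16):
        super().__init__()
        self.gat_spatial = DenseGATLayer(in_dim, hid_dim)
        self.gat_morph = DenseGATLayer(in_dim, hid_dim)
        self.project = nn.Linear(hid_dim, z_dim)

    def forward(self, x, adj_spatial, adj_morph):
        z = 0.5 * (self.gat_spatial(x, adj_spatial) + self.gat_morph(x, adj_morph))
        return self.project(z)

# Toy usage: 100 spots, 200 genes, 32-dimensional image-patch features.
coords = np.random.rand(100, 2)
expr = torch.rand(100, 200)
img_feats = np.random.rand(100, 32)
eye = np.eye(100)                                   # self-loops keep every softmax row well-defined
A_spa = torch.tensor(build_spatial_graph(coords) + eye, dtype=torch.float32)
A_mor = torch.tensor(build_morphology_graph(img_feats) + eye, dtype=torch.float32)
emb = MultiViewEncoder(in_dim=200)(expr, A_spa, A_mor)   # (100, 16) spot embeddings

Per the abstract, the full method additionally reconstructs denoised expression profiles and clusters the embeddings into spatial domains; those steps are omitted from this sketch.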
Pages: 720-733 (14 pages)
Related Papers (50 in total)
  • [1] Integrating multi-modal information to detect spatial domains of spatial transcriptomics by graph attention network
    Huo, Yuying
    Guo, Yilang
    Wang, Jiakang
    Xue, Huijie
    Feng, Yujuan
    Chen, Weizheng
    Li, Xiangyu
    JOURNAL OF GENETICS AND GENOMICS, 2023, 50 (09) : 720 - 733
  • [2] Identifying spatial domains from spatial transcriptome by graph attention network
    Wu, H.
    Gao, J.
    Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering, 2024, 41 (02): 246 - 252
  • [3] Adversarial Graph Attention Network for Multi-modal Cross-modal Retrieval
    Wu, Hongchang
    Guan, Ziyu
    Zhi, Tao
    Zhao, Wei
    Xu, Cai
    Han, Hong
    Yang, Yaming
    2019 10TH IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (ICBK 2019), 2019, : 265 - 272
  • [4] Multi-modal spatial querying
    Egenhofer, MJ
    ADVANCES IN GIS RESEARCH II, 1997, : 785 - 799
  • [5] Integrated Heterogeneous Graph Attention Network for Incomplete Multi-modal Clustering
    Wang, Yu
    Yao, Xinjie
    Zhu, Pengfei
    Li, Weihao
    Cao, Meng
    Hu, Qinghua
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (09) : 3847 - 3866
  • [6] Graph deep learning enabled spatial domains identification for spatial transcriptomics
    Liu, Teng
    Fang, Zhao-Yu
    Li, Xin
    Zhang, Li-Ning
    Cao, Dong-Sheng
    Yin, Ming-Zhu
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (03)
  • [7] Multi-modal spatial relational attention networks for visual question answering
    Yao, Haibo
    Wang, Lipeng
    Cai, Chengtao
    Sun, Yuxin
    Zhang, Zhi
    Luo, Yongkang
    IMAGE AND VISION COMPUTING, 2023, 140
  • [8] Spatial-MGCN: a novel multi-view graph convolutional network for identifying spatial domains with attention mechanism
    Wang, Bo
    Luo, Jiawei
    Liu, Ying
    Shi, Wanwan
    Xiong, Zehao
    Shen, Cong
    Long, Yahui
    BRIEFINGS IN BIOINFORMATICS, 2023, 24 (05)
  • [9] Graspot: a graph attention network for spatial transcriptomics data integration with optimal transport
    Gao, Zizhan
    Cao, Kai
    Wan, Lin
    BIOINFORMATICS, 2024, 40 : ii137 - ii145
  • [10] Indescribable Multi-modal Spatial Evaluator
    Kong, Lingke
    Qi, X. Sharon
    Shen, Qijin
    Wang, Jiacheng
    Zhang, Jingyi
    Hu, Yanle
    Zhou, Qichao
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 9853 - 9862