Multi-Modal Non-Euclidean Brain Network Analysis With Community Detection and Convolutional Autoencoder

Cited by: 8
Authors
Zhu, Qi [1]
Yang, Jing [1]
Wang, Shuihua [2]
Zhang, Daoqiang [1]
Zhang, Zheng [3]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Univ Leicester, Sch Math & Actuarial Sci, Leicester LE2 4SN, Leics, England
[3] Harbin Inst Technol, Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Network analyzers; Brain modeling; Kernel; Deep learning; Functional magnetic resonance imaging; Diffusion tensor imaging; Convolutional codes; Brain network analysis; community detection; convolutional autoencoder; structural modal; functional modal; FUNCTIONAL CONNECTIVITY; CONSTRUCTION; STRENGTH;
DOI
10.1109/TETCI.2022.3171855
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Brain network analysis is one of the most effective methods for brain disease diagnosis. Existing studies have shown that exploiting information from multi-modal data is a valuable way to improve the effectiveness of brain network analysis. In recent years, deep learning has received increasing attention due to its powerful feature-learning capability, so it is natural to introduce this tool into multi-modal brain network analysis. However, doing so faces two challenges. One is that brain networks lie in a non-Euclidean domain, so the convolution kernels of deep learning cannot be applied to them directly. The other is that most existing multi-modal brain network analysis methods cannot make full use of the complementary information from distinct modalities. In this paper, we propose a multi-modal non-Euclidean brain network analysis method based on community detection and a convolutional autoencoder (M2CDCA), which addresses both problems within a single framework. First, we construct the functional and structural brain networks, respectively. Second, we design a multi-modal interactive community detection method in which the structural modality guides the functional modality in detecting community structure; the node ordering is then readjusted so that the adjusted brain network preserves the latent community information and is better suited to convolution kernels. Finally, we design a dual-channel convolutional autoencoder with a self-attention mechanism to capture hierarchical and highly non-linear features, and comprehensively use the information of both modalities for classification. We evaluate our method on an epilepsy dataset, and the experimental results show that it outperforms several state-of-the-art methods.
Pages: 436 - 446
Number of pages: 11
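
The abstract above describes two technical steps: community-guided reordering of brain-network nodes so that ordinary convolution kernels become applicable, and a dual-channel convolutional autoencoder that fuses functional and structural features for classification. The Python/PyTorch snippet below is a rough, hypothetical sketch of those two ideas, not the authors' M2CDCA code: the 90-region atlas size, layer widths, and all function and class names are assumptions, and the paper's multi-modal interactive community detection and self-attention module are not reproduced here.

```python
# Hypothetical sketch, not the authors' M2CDCA implementation.
# (a) Reorder a connectivity matrix by community labels so it becomes
#     block-structured and convolution-friendly.
# (b) A dual-channel convolutional autoencoder fusing functional and
#     structural inputs for classification.
# All names, shapes, and hyperparameters are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn


def reorder_by_community(adj: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Permute rows/columns of an N x N connectivity matrix so that nodes in
    the same community are adjacent (labels could come from any community
    detection step, e.g. one guided by the structural modality)."""
    order = np.argsort(labels, kind="stable")
    return adj[np.ix_(order, order)]


class ChannelAE(nn.Module):
    """One convolutional autoencoder channel (one per modality)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        # output_padding values assume a 90 x 90 input (e.g. a 90-region atlas)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)            # hierarchical, non-linear features
        return z, self.decoder(z)      # latent code and reconstruction


class DualChannelClassifier(nn.Module):
    """Fuse the two modality-specific latent codes for classification
    (the paper's self-attention module is omitted in this sketch)."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.func_ae = ChannelAE()
        self.struct_ae = ChannelAE()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),   # spatial size independent of atlas size
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, func_img, struct_img):
        zf, rec_f = self.func_ae(func_img)
        zs, rec_s = self.struct_ae(struct_img)
        logits = self.head(torch.cat([zf, zs], dim=1))
        return logits, rec_f, rec_s


if __name__ == "__main__":
    # Toy usage: random 90 x 90 connectivity matrices for 4 subjects.
    labels = np.random.randint(0, 5, size=90)
    fc = reorder_by_community(np.random.rand(90, 90), labels)
    sc = reorder_by_community(np.random.rand(90, 90), labels)
    func = torch.tensor(fc, dtype=torch.float32)[None, None].repeat(4, 1, 1, 1)
    struct = torch.tensor(sc, dtype=torch.float32)[None, None].repeat(4, 1, 1, 1)
    logits, rec_f, rec_s = DualChannelClassifier()(func, struct)
    print(logits.shape, rec_f.shape)   # (4, 2) and (4, 1, 90, 90)
```

In a real setting the autoencoder channels would be trained with a reconstruction loss alongside the classification loss, and the random community labels above would be replaced by the structure-guided community detection described in the abstract.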