Multi-Modal Non-Euclidean Brain Network Analysis With Community Detection and Convolutional Autoencoder

Cited by: 8
Authors
Zhu, Qi [1 ]
Yang, Jing [1 ]
Wang, Shuihua [2 ]
Zhang, Daoqiang [1 ]
Zhang, Zheng [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China
[2] Univ Leicester, Sch Math & Actuarial Sci, Leicester LE2 4SN, Leics, England
[3] Harbin Inst Technol, Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China;
Keywords
Network analyzers; Brain modeling; Kernel; Deep learning; Functional magnetic resonance imaging; Diffusion tensor imaging; Convolutional codes; Brain network analysis; community detection; convolutional autoencoder; structural modal; functional modal; FUNCTIONAL CONNECTIVITY; CONSTRUCTION; STRENGTH;
DOI
10.1109/TETCI.2022.3171855
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Brain network analysis is one of the most effective methods for brain disease diagnosis. Existing studies have shown that exploiting information from multi-modal data is a valuable way to improve the effectiveness of brain network analysis. In recent years, deep learning has received increasing attention due to its powerful feature learning capability, and it is natural to introduce this tool into multi-modal brain network analysis. However, doing so faces two challenges. One is that brain networks lie in a non-Euclidean domain, so the convolution kernels of deep learning cannot be applied to them directly. The other is that most existing multi-modal brain network analysis methods cannot make full use of the complementary information from distinct modalities. In this paper, we propose a multi-modal non-Euclidean brain network analysis method based on community detection and convolutional autoencoder, which solves the above two problems simultaneously in one framework (M2CDCA). First, we construct the functional and structural brain networks, respectively. Second, we design a multi-modal interactive community detection method that exploits the structural modality to guide the functional modality in detecting community structure, and then reorders the nodes so that the adjusted brain network preserves the latent community information and is better suited to convolution kernels. Finally, we design a dual-channel convolutional autoencoder with a self-attention mechanism to capture hierarchical and highly non-linear features, and then comprehensively use the information from both modalities for classification. We evaluate our method on an epilepsy dataset, and the experimental results show that it outperforms several state-of-the-art methods.
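The following is a minimal, hypothetical sketch of the pipeline the abstract describes, not the authors' implementation: it reorders a connectivity matrix so that nodes in the same community become adjacent (making the matrix block-structured and thus friendlier to ordinary convolution kernels), and then passes it through a small convolutional autoencoder. It substitutes networkx greedy modularity for the paper's structure-guided interactive community detection, uses a toy 96-node symmetric matrix instead of real fMRI/DTI networks, and omits the second (structural) branch and the self-attention fusion for brevity.

```python
# Hypothetical sketch only (assumptions: networkx greedy modularity stands in
# for the paper's multi-modal interactive community detection; one branch of
# the dual-channel autoencoder is shown; self-attention fusion is omitted).
import numpy as np
import torch
import torch.nn as nn
from networkx import from_numpy_array
from networkx.algorithms import community


def reorder_by_communities(adj: np.ndarray) -> np.ndarray:
    """Permute nodes so members of the same community sit next to each other,
    turning intra-community connections into dense diagonal blocks."""
    g = from_numpy_array(np.abs(adj))
    comms = community.greedy_modularity_communities(g, weight="weight")
    order = [node for comm in comms for node in sorted(comm)]
    return adj[np.ix_(order, order)]


class ConvAutoencoder(nn.Module):
    """One branch: encode an N x N connectivity matrix and reconstruct it."""

    def __init__(self, channels: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels * 2, channels, 3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 3, stride=2,
                               padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)          # latent features for later classification
        return self.decoder(z), z    # reconstruction + latent code


# Toy usage: 96 regions (chosen so the strided convolutions divide evenly),
# random symmetric "functional connectivity" matrix.
rng = np.random.default_rng(0)
func = rng.random((96, 96))
func = (func + func.T) / 2
adj = reorder_by_communities(func)
x = torch.tensor(adj, dtype=torch.float32).view(1, 1, 96, 96)
recon, latent = ConvAutoencoder()(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
```

In the full method, a second branch of the same form would process the structural (DTI) network, and the two latent codes would be fused before classification; that fusion step is what the paper's self-attention mechanism addresses.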
Pages: 436-446
Number of pages: 11
Related Papers
50 records in total
  • [31] Analysis of multi-modal brain signals in awake mice
    Stopper, G.
    Stopper, L.
    Caudal, L. C.
    Scheller, A.
    Kirchhoff, F.
    [J]. GLIA, 2019, 67 : E175 - E175
  • [32] Bayesian analysis of multi-modal data and brain imaging
    Assadi, A
    Eghbalnia, H
    Backonja, M
    Wakai, R
    Rutecki, P
    Haughton, V
    [J]. MEDICAL IMAGING 2000: IMAGE PROCESSING, PTS 1 AND 2, 2000, 3979 : 1160 - 1167
  • [33] MKGCN: Multi-Modal Knowledge Graph Convolutional Network for Music Recommender Systems
    Cui, Xiaohui
    Qu, Xiaolong
    Li, Dongmei
    Yang, Yu
    Li, Yuxun
    Zhang, Xiaoping
    [J]. ELECTRONICS, 2023, 12 (12)
  • [34] Multi-Modal Fusion Object Tracking Based on Fully Convolutional Siamese Network
    Qi, Ke
    Chen, Liji
    Zhou, Yicong
    Qi, Yutao
    [J]. 2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023, : 440 - 444
  • [35] Multi-modal Fusion Network for Rumor Detection with Texts and Images
    Li, Boqun
    Qian, Zhong
    Li, Peifeng
    Zhu, Qiaoming
    [J]. MULTIMEDIA MODELING (MMM 2022), PT I, 2022, 13141 : 15 - 27
  • [36] Multi-Modal Interaction Graph Convolutional Network for Temporal Language Localization in Videos
    Zhang, Zongmeng
    Han, Xianjing
    Song, Xuemeng
    Yan, Yan
    Nie, Liqiang
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 8265 - 8277
  • [37] Cerebral aneurysm image segmentation based on multi-modal convolutional neural network
    Meng, Chengjie
    Yang, Debiao
    Chen, Dan
    [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2021, 208
  • [38] PolSAR Image Classification Based on Multi-Modal Contrastive Fully Convolutional Network
    Hua, Wenqiang
    Wang, Yi
    Yang, Sijia
    Jin, Xiaomin
    [J]. REMOTE SENSING, 2024, 16 (02)
  • [39] A Multi-modal Convolutional Neural Network Framework for the Prediction of Alzheimer's Disease
    Spasov, Simeon E.
    Passamonti, Luca
    Duggento, Andrea
    Lio, Pietro
    Toschi, Nicola
    [J]. 2018 40TH ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2018, : 1271 - 1274
  • [40] EDNet: A Mesoscale Eddy Detection Network with Multi-Modal Data
    Fan, Zhenlin
    Zhong, Guoqiang
    Wei, Hongxu
    Li, Haitao
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,