Optimized Convolutional Fusion for Multimodal Neuroimaging in Alzheimer's Disease Diagnosis: Enhancing Data Integration and Feature Extraction

Cited by: 7
Authors
Odusami, Modupe [1 ]
Maskeliunas, Rytis [1 ]
Damasevicius, Robertas [2 ]
Affiliations
[1] Kaunas Univ Technol, Dept Multimedia Engn, LT-51423 Kaunas, Lithuania
[2] Vytautas Magnus Univ, Dept Appl Informat, LT-53361 Kaunas, Lithuania
Source
JOURNAL OF PERSONALIZED MEDICINE | 2023, Vol. 13, No. 10
Keywords
Alzheimer's disease; data integration; feature extraction; multimodal neuroimaging; optimized convolution;
DOI
10.3390/jpm13101496
Chinese Library Classification: R19 [Health organization and services (health administration)]
Abstract
Multimodal neuroimaging has gained traction in Alzheimer's disease (AD) diagnosis by integrating information from multiple imaging modalities to enhance classification accuracy. However, effectively handling heterogeneous data sources and overcoming the challenges posed by multiscale transform methods remain significant hurdles. This article proposes a novel approach to address these challenges. To harness the power of diverse neuroimaging data, we employ a strategy built on optimized convolution techniques. These optimizations include varying kernel sizes and instance normalization, both of which play crucial roles in feature extraction from magnetic resonance imaging (MRI) and positron emission tomography (PET) images. Varying kernel sizes adapt the receptive field to different image characteristics, improving the model's ability to capture relevant information. We further employ transposed convolution, likewise optimized with varying kernel sizes and instance normalization, to increase the spatial resolution of the feature maps; this heightened resolution facilitates the alignment and integration of the disparate MRI and PET data. Larger kernels and strides in the transposed convolution expand the receptive field, enabling the model to capture essential cross-modal relationships. Instance normalization, applied to each modality during fusion, mitigates biases stemming from differences in intensity, contrast, or scale between modalities, improving model performance by reducing complexity and ensuring robust fusion.
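The fusion pipeline described above can be sketched as follows. This is a minimal illustrative example in PyTorch; the paper's code is not reproduced here, so the module names, channel counts, and additive fusion rule are all assumptions, not the authors' implementation:

```python
# Hedged sketch of the described ideas: parallel convolutions with varying
# kernel sizes, instance normalization, and a transposed convolution that
# upsamples feature maps before the MRI and PET branches are fused.
import torch
import torch.nn as nn

class FusionBranch(nn.Module):
    """Feature extractor for one modality (hypothetical layer sizes)."""
    def __init__(self, in_ch=1, out_ch=16):
        super().__init__()
        # Varying kernel sizes adapt the receptive field.
        self.conv3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        # Instance normalization mitigates intensity/contrast differences.
        self.norm = nn.InstanceNorm2d(2 * out_ch)
        # Transposed convolution with stride 2 doubles spatial resolution.
        self.up = nn.ConvTranspose2d(2 * out_ch, out_ch, kernel_size=4,
                                     stride=2, padding=1)

    def forward(self, x):
        feats = torch.cat([self.conv3(x), self.conv5(x)], dim=1)
        return self.up(self.norm(feats))

mri = torch.randn(1, 1, 64, 64)   # toy single-slice MRI tensor
pet = torch.randn(1, 1, 64, 64)   # toy single-slice PET tensor
fused = FusionBranch()(mri) + FusionBranch()(pet)  # simple additive fusion
print(fused.shape)  # torch.Size([1, 16, 128, 128])
```

The stride-2 transposed convolution takes each 64x64 input to 128x128, illustrating how the heightened resolution gives both modalities a common grid on which to align before fusion.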
The proposed fusion method is assessed on three distinct neuroimaging datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI), with 50 participants per stage for both MRI and PET (Cognitive Normal, AD, and Early Mild Cognitive Impairment); the Open Access Series of Imaging Studies (OASIS), with 50 participants per stage for both MRI and PET (Cognitive Normal, Mild Dementia, and Very Mild Dementia); and the Whole Brain Atlas neuroimaging dataset (AANLIB), with 50 participants per stage for both MRI and PET (Cognitive Normal and AD). To evaluate the quality of the fused images generated by our method, we employ a comprehensive set of metrics: the Structural Similarity Index Measure (SSIM), which assesses the structural similarity between two images; Peak Signal-to-Noise Ratio (PSNR), which measures how closely the generated image resembles the ground truth; Entropy (E), which assesses the amount of information preserved or lost during fusion; the Feature Similarity Index Measure (FSIM), which assesses structural and feature similarities between two images; and Edge-Based Similarity (EBS), which measures the similarity of edges between the fused and ground-truth images. The fused image is further evaluated with a Mobile Vision Transformer classifier. In the classification of AD vs. Cognitive Normal, the model achieved 99.00% accuracy, 99.00% specificity, and 98.44% sensitivity on the AANLIB dataset.
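Two of the simpler fusion-quality metrics named above, PSNR and entropy, follow directly from their definitions and can be sketched in NumPy on toy 8-bit images (SSIM, FSIM, and EBS require windowed or gradient statistics and are omitted here):

```python
# Hedged sketch of PSNR and Shannon entropy as fusion-quality metrics.
import numpy as np

def psnr(reference, fused, max_val=255.0):
    """Peak Signal-to-Noise Ratio: closeness of fused image to reference."""
    mse = np.mean((reference.astype(float) - fused.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def entropy(image, bins=256):
    """Shannon entropy in bits: information content of an 8-bit image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty intensity bins
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64))                       # toy image
noisy = np.clip(ref + rng.normal(0, 5, size=ref.shape), 0, 255)  # degraded
print(psnr(ref, noisy))   # higher is better; infinite for identical images
print(entropy(ref))       # near 8 bits for roughly uniform pixel values
```

Higher PSNR indicates a fused image closer to the reference, while entropy close to that of the source images suggests little information was lost in fusion.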
Pages: 19
Related Papers (50 total)
  • [1] Alzheimer's disease multiclass diagnosis via multimodal neuroimaging embedding feature selection and fusion
    Zhang, Yuanpeng
    Wang, Shuihua
    Xia, Kaijian
    Jiang, Yizhang
    Qian, Pengjiang
    INFORMATION FUSION, 2021, 66 : 170 - 183
  • [2] Multimodal Neuroimaging Feature Learning for Multiclass Diagnosis of Alzheimer's Disease
    Liu, Siqi
    Liu, Sidong
    Cai, Weidong
    Che, Hangyu
    Pujol, Sonia
    Kikinis, Ron
    Feng, Dagan
    Fulham, Michael J.
    IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, 2015, 62 (04) : 1132 - 1140
  • [3] Alzheimer's disease diagnosis via multimodal feature fusion
    Tu, Yue
    Lin, Shukuan
    Qiao, Jianzhong
    Zhuang, Yilin
    Zhang, Peng
    COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 148
  • [4] Multi-modal neuroimaging feature fusion for diagnosis of Alzheimer's disease
    Zhang, Tao
    Shi, Mingyang
    JOURNAL OF NEUROSCIENCE METHODS, 2020, 341
  • [5] Multimodal Neuroimaging Feature Learning With Multimodal Stacked Deep Polynomial Networks for Diagnosis of Alzheimer's Disease
    Shi, Jun
    Zheng, Xiao
    Li, Yan
    Zhang, Qi
    Ying, Shihui
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2018, 22 (01) : 173 - 183
  • [6] Deep learning and multimodal feature fusion for the aided diagnosis of Alzheimer's disease
    Jia, Hongfei
    Lao, Huan
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (22): 19585 - 19598
  • [7] Deep joint learning diagnosis of Alzheimer's disease based on multimodal feature fusion
    Wang, Jingru
    Wen, Shipeng
    Liu, Wenjie
    Meng, Xianglian
    Jiao, Zhuqing
    BIODATA MINING, 2024, 17 (01)
  • [8] Assisted Diagnosis of Alzheimer's Disease Based on Deep Learning and Multimodal Feature Fusion
    Wang, Yu
    Liu, Xi
    Yu, Chongchong
    COMPLEXITY, 2021, 2021
  • [9] Classification and diagnosis model for Alzheimer's disease based on multimodal data fusion
    Fu, Yaqin
    Xu, Lin
    Zhang, Yujie
    Zhang, Linshuai
    Zhang, Pengfei
    Cao, Lu
    Jiang, Tao
    MEDICINE, 2024, 103 (52)