Enhancing Multimodal Patterns in Neuroimaging by Siamese Neural Networks with Self-Attention Mechanism

Cited by: 13
Authors
Arco, Juan E. [1 ,2 ,3 ]
Ortiz, Andres [2 ,3 ]
Gallego-Molina, Nicolas J. [2 ,3 ]
Gorriz, Juan M. [1 ,3 ]
Ramirez, Javier [1 ,3 ]
Affiliations
[1] Univ Granada, Dept Signal Theory Networking & Commun, Granada 18010, Spain
[2] Univ Malaga, Dept Commun Engn, Malaga 29010, Spain
[3] Andalusian Res Inst Data Sci & Computat Intelligence, Granada, Spain
Keywords
Multimodal combination; siamese neural network; self-attention; deep learning; medical imaging; ALZHEIMERS-DISEASE; FUNCTIONAL CONNECTIVITY; MATTER LOSS; DIAGNOSIS; FUSION; MULTISCALE; MRI
DOI
10.1142/S0129065723500193
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The combination of different sources of information is currently one of the most relevant aspects in the diagnostic process of several diseases. In the field of neurological disorders, different imaging modalities providing structural and functional information are frequently available. These modalities are usually analyzed separately, although a joint analysis of the features extracted from both sources can improve the classification performance of Computer-Aided Diagnosis (CAD) tools. Previous studies have computed independent models from each individual modality and combined them in a subsequent stage, which is not an optimal solution. In this work, we propose a method based on the principles of siamese neural networks to fuse information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). This framework quantifies the similarities between both modalities and relates them to the diagnostic label during the training process. The resulting latent space at the output of this network is then fed into an attention module in order to evaluate the relevance of each brain region at different stages of the development of Alzheimer's disease. The excellent results obtained and the high flexibility of the proposed method allow the fusion of more than two modalities, leading to a scalable methodology that can be applied in a wide range of contexts.
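The abstract describes a two-branch (siamese) encoder shared across MRI and PET whose fused latent representation is passed through a self-attention block to weight brain regions before classification. The following is a minimal PyTorch sketch of that general idea only; the class name SiameseAttentionFusion, the layer sizes, the choice of 116 atlas regions, the cosine-similarity term, and the use of regional summary features are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: siamese encoding of two modalities + self-attention over regions.
# All architectural choices below are assumptions for illustration.
import torch
import torch.nn as nn

class SiameseAttentionFusion(nn.Module):
    def __init__(self, n_regions=116, region_dim=64, n_heads=4):
        super().__init__()
        # Shared encoder applied to each modality (weight sharing = siamese principle).
        self.encoder = nn.Sequential(
            nn.Linear(1, region_dim), nn.ReLU(),
            nn.Linear(region_dim, region_dim),
        )
        # Self-attention across brain regions to estimate their relative relevance.
        self.attn = nn.MultiheadAttention(embed_dim=region_dim, num_heads=n_heads,
                                          batch_first=True)
        self.classifier = nn.Linear(region_dim, 2)  # e.g. controls vs. Alzheimer's disease

    def forward(self, mri, pet):
        # mri, pet: (batch, n_regions) regional summaries (e.g. GM density, FDG uptake).
        z_mri = self.encoder(mri.unsqueeze(-1))   # (batch, n_regions, region_dim)
        z_pet = self.encoder(pet.unsqueeze(-1))
        # Per-region similarity between modality embeddings (could feed a contrastive-style loss).
        similarity = nn.functional.cosine_similarity(z_mri, z_pet, dim=-1)
        # Fuse the two modality embeddings and apply self-attention over regions.
        fused = z_mri + z_pet
        attended, attn_weights = self.attn(fused, fused, fused)  # weights: region-to-region relevance
        logits = self.classifier(attended.mean(dim=1))
        return logits, similarity, attn_weights

# Toy usage with random data standing in for regional MRI/PET features.
model = SiameseAttentionFusion()
mri, pet = torch.randn(8, 116), torch.randn(8, 116)
logits, similarity, attn_weights = model(mri, pet)
print(logits.shape, similarity.shape, attn_weights.shape)  # (8, 2), (8, 116), (8, 116, 116)
```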
Pages: 18
Related Papers (50 in total)
  • [11] Combining Contextual Information by Self-attention Mechanism in Convolutional Neural Networks for Text Classification
    Wu, Xin
    Cai, Yi
    Li, Qing
    Xu, Jingyun
    Leung, Ho-fung
    WEB INFORMATION SYSTEMS ENGINEERING, WISE 2018, PT I, 2018, 11233 : 453 - 467
  • [12] Automatic Food Recognition Using Deep Convolutional Neural Networks with Self-attention Mechanism
    Abiyev, Rahib
    Adepoju, Joseph
    Human-Centric Intelligent Systems, 2024, 4 (1): 171 - 186
  • [13] On the Global Self-attention Mechanism for Graph Convolutional Networks
    Wang, Chen
    Deng, Chengyuan
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 8531 - 8538
  • [14] Exploiting the Self-Attention Mechanism in Gas Sensor Array (GSA) Data With Neural Networks
    Wang, Ningning
    Li, Silong
    Ye, Terry Tao
    IEEE SENSORS JOURNAL, 2023, 23 (06) : 5988 - 5996
  • [15] Enhancing concrete defect segmentation using multimodal data and Siamese Neural Networks
    Pozzer, Sandra
    Ramos, Gabriel
    Azar, Ehsan Rezazadeh
    Osman, Ahmad
    El Refai, Ahmed
    Lopez, Fernando
    Ibarra-Castanedo, Clemente
    Maldague, Xavier
    AUTOMATION IN CONSTRUCTION, 2024, 166
  • [16] A finger vein authentication method based on the lightweight Siamese network with the self-attention mechanism
    Fang, Chunxin
    Ma, Hui
    Li, Jianian
    INFRARED PHYSICS & TECHNOLOGY, 2023, 128
  • [17] Multi-subspace self-attention siamese networks for fault diagnosis with limited data
    Zhang, Xue
    Chen, Yongyi
    Ni, Hongjie
    Zhang, Dan
    Abdulaal, Mohammed
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (03) : 2465 - 2472
  • [19] Neural Named Entity Recognition Using a Self-Attention Mechanism
    Zukov-Gregoric, Andrej
    Bachrach, Yoram
    Minkovsky, Pasha
    Coope, Sam
    Maksak, Bogdan
    2017 IEEE 29TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2017), 2017, : 652 - 656
  • [20] Real world image tampering localization combining the self-attention mechanism and convolutional neural networks
    Zhong H.
    Bian S.
    Wang C.
Xi'an Dianzi Keji Daxue Xuebao/Journal of Xidian University, 2024, 51 (01): 135 - 146