FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment

Cited: 0
|
Authors
Chen, Xuanmin [1 ]
Ma, Liyan [1 ,2 ]
Ying, Shihui [3 ]
Shen, Dinggang [4 ,5 ]
Zeng, Tieyong [6 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[2] Shanghai Univ, School of Mechatron Engn & Automat, Shanghai Key Laboratory of Intelligent Mfg & Robot, Shanghai 200444, Peoples R China
[3] Shanghai Univ, Sch Sci, Dept Math, Shanghai 200444, Peoples R China
[4] ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
[5] Shanghai United Imaging Intelligence Co Ltd, Shanghai 200030, Peoples R China
[6] Chinese Univ Hong Kong, Ctr Math Artificial Intelligence, Dept Math, Hong Kong, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Image reconstruction; Magnetic resonance imaging; Convolution; Training; Imaging; Frequency-domain analysis; Compressed sensing; MRI reconstruction; multi-modal feature alignment; feature refinement; IMAGE-RECONSTRUCTION; NEURAL-NETWORK; CONTRAST MRI; TRANSFORMER;
DOI
10.1109/JBHI.2024.3432139
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, because these modalities are acquired at different speeds, obtaining the full set of images can be time-consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as T2-modality, by utilizing redundant information from faster, fully sampled modalities, such as T1-modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. The FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also contributes to stabilizing the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI, and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
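The frequency-domain filtering step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fused feature map is moved into the frequency domain with a 2-D FFT, each frequency component is reweighted by a mask, and the result is transformed back. In FEFA the weighting is learned; here a fixed low-pass mask stands in for it, and all names (`frequency_filter`, `mask`) are illustrative.

```python
import numpy as np

def frequency_filter(features, mask):
    """Reweight a (C, H, W) feature map in the frequency domain.

    Sketch of frequency-domain feature filtering: per-channel 2-D FFT,
    element-wise reweighting by `mask` (shape (H, W)), inverse FFT.
    """
    spec = np.fft.fft2(features, axes=(-2, -1))  # per-channel 2-D FFT
    spec = spec * mask                           # emphasize/suppress bands
    return np.real(np.fft.ifft2(spec, axes=(-2, -1)))

# Example: a fixed low-pass mask that keeps only low spatial frequencies
# (a learned mask would play this role in the actual method).
C, H, W = 2, 8, 8
feats = np.random.default_rng(0).standard_normal((C, H, W))
fy = np.fft.fftfreq(H)[:, None]
fx = np.fft.fftfreq(W)[None, :]
mask = (np.sqrt(fy**2 + fx**2) < 0.25).astype(float)
out = frequency_filter(feats, mask)
assert out.shape == feats.shape
```

Note that an all-ones mask leaves the features unchanged (up to floating-point error), so the filter degrades gracefully to an identity map when no band is suppressed.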
Pages: 6751-6763 (13 pages)