FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment

Cited by: 0
Authors
Chen, Xuanmin [1 ]
Ma, Liyan [1 ,2 ]
Ying, Shihui [3 ]
Shen, Dinggang [4 ,5 ]
Zeng, Tieyong [6 ]
Affiliations
[1] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[2] Shanghai Univ, Sch Mechatron Engn & Automat, Shanghai Key Laboratory of Intelligent Mfg & Robot, Shanghai 200444, Peoples R China
[3] Shanghai Univ, Sch Sci, Dept Math, Shanghai 200444, Peoples R China
[4] ShanghaiTech Univ, Sch Biomed Engn, Shanghai 201210, Peoples R China
[5] Shanghai United Imaging Intelligence Co Ltd, Shanghai 200030, Peoples R China
[6] Chinese Univ Hong Kong, Ctr Math Artificial Intelligence, Dept Math, Hong Kong, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Image reconstruction; Magnetic resonance imaging; Convolution; Training; Imaging; Frequency-domain analysis; Compressed sensing; MRI reconstruction; multi-modal feature alignment; feature refinement; IMAGE-RECONSTRUCTION; NEURAL-NETWORK; CONTRAST MRI; TRANSFORMER;
DOI
10.1109/JBHI.2024.3432139
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, the different acquisition speeds of these modalities mean that obtaining information can be time consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as T2-modality, by utilizing redundant information from faster, fully sampled modalities, such as T1-modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. The FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while simultaneously suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also contributes to stabilizing the training process. Compared to existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
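The abstract describes filtering fused multi-modal features in the frequency domain to enhance important components and suppress less essential ones. A minimal, illustrative sketch of this general idea follows; it is not the paper's implementation, and the fixed binary mask here merely stands in for weights that a network such as FEFA would learn:

```python
import numpy as np

def frequency_filter(features, mask):
    """Reweight a feature map in the frequency domain.

    Sketch of the frequency-enhancement idea: transform features with a
    2-D FFT, multiply the spectrum by a mask (learned in practice; fixed
    here for illustration), and transform back to the spatial domain.
    """
    spectrum = np.fft.fft2(features, axes=(-2, -1))   # per-channel 2-D FFT
    filtered = spectrum * mask                        # emphasize / suppress bands
    return np.real(np.fft.ifft2(filtered, axes=(-2, -1)))

# Toy usage: keep only a corner of low-frequency coefficients of an 8x8 map.
feats = np.random.rand(1, 8, 8)                       # (channels, H, W)
mask = np.zeros((8, 8))
mask[:3, :3] = 1.0                                    # crude low-pass mask
out = frequency_filter(feats, mask)
print(out.shape)                                      # (1, 8, 8)
```

In a trained model, `mask` would be a learnable parameter (or predicted per input), so the network itself decides which frequency bands to amplify or attenuate.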
Pages: 6751 - 6763
Number of pages: 13
Related Papers
50 records
  • [1] Deep unfolding network with spatial alignment for multi-modal MRI reconstruction
    Zhang, Hao
    Wang, Qi
    Shi, Jun
    Ying, Shihui
    Wen, Zhijie
    MEDICAL IMAGE ANALYSIS, 2025, 99
  • [2] Adaptive Feature Fusion for Multi-modal Entity Alignment
    Guo H.
    Li X.-Y.
    Tang J.-Y.
    Guo Y.-M.
    Zhao X.
Zidonghua Xuebao/Acta Automatica Sinica, 2024, 50 (04): 758 - 770
  • [3] Multi-Modal Entity Alignment Method Based on Feature Enhancement
    Wang, Huansha
    Liu, Qinrang
    Huang, Ruiyang
    Zhang, Jianpeng
    APPLIED SCIENCES-BASEL, 2023, 13 (11):
  • [4] Enhanced Entity Interaction Modeling for Multi-Modal Entity Alignment
    Li, Jinxu
    Zhou, Qian
    Chen, Wei
    Zhao, Lei
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT II, KSEM 2023, 2023, 14118 : 214 - 227
  • [5] Deep learning-based multi-modal computing with feature disentanglement for MRI image synthesis
    Fei, Yuchen
    Zhan, Bo
    Hong, Mei
    Wu, Xi
    Zhou, Jiliu
    Wang, Yan
    MEDICAL PHYSICS, 2021, 48 (07) : 3778 - 3789
  • [6] Robust Domain Misinformation Detection via Multi-Modal Feature Alignment
    Liu, Hui
    Wang, Wenya
    Sun, Hao
    Rocha, Anderson
    Li, Haoliang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 793 - 806
  • [7] Semi-supervised Grounding Alignment for Multi-modal Feature Learning
    Chou, Shih-Han
    Fan, Zicong
    Little, James J.
    Sigal, Leonid
    2022 19TH CONFERENCE ON ROBOTS AND VISION (CRV 2022), 2022, : 48 - 57
  • [8] Multi-modal deep learning architecture for enhanced feature extraction and classification of imagined speech words
    Mohan, Anand
    Anand, R. S.
    ENGINEERING RESEARCH EXPRESS, 2025, 7 (01):
  • [9] Multi-Modal Knowledge Graph Transformer Framework for Multi-Modal Entity Alignment
    Li, Qian
    Ji, Cheng
    Guo, Shu
    Liang, Zhaoji
    Wang, Lihong
    Li, Jianxin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS - EMNLP 2023, 2023, : 987 - 999
  • [10] Structured Multi-modal Feature Embedding and Alignment for Image-Sentence Retrieval
    Ge, Xuri
    Chen, Fuhai
    Jose, Joemon M.
    Ji, Zhilong
    Wu, Zhongqin
    Liu, Xiao
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 5185 - 5193