Accurate Multi-contrast MRI Super-Resolution via a Dual Cross-Attention Transformer Network

Cited by: 4
Authors
Huang, Shoujin [1 ]
Li, Jingyu [1 ]
Mei, Lifeng [1 ]
Zhang, Tan [1 ]
Chen, Ziran [1 ]
Dong, Yu [2 ]
Dong, Linzheng [2 ]
Liu, Shaojun [1 ]
Lyu, Mengye [1 ]
Affiliations
[1] Shenzhen Technology University, Shenzhen, People's Republic of China
[2] Shenzhen Samii Medical Center, Shenzhen, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Magnetic resonance imaging; Super-resolution; Multi-contrast
DOI
10.1007/978-3-031-43999-5_30
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Magnetic Resonance Imaging (MRI) is a critical imaging tool in clinical diagnosis, but obtaining high-resolution MRI images can be challenging due to hardware and scan time limitations. Recent studies have shown that using reference images from multi-contrast MRI data could improve super-resolution quality. However, the commonly employed strategies, e.g., channel concatenation or hard-attention based texture transfer, may not be optimal given the visual differences between multi-contrast MRI images. To address these limitations, we propose a new Dual Cross-Attention Multi-contrast Super Resolution (DCAMSR) framework. This approach introduces a dual cross-attention transformer architecture, where the features of the reference image and the upsampled input image are extracted and promoted with both spatial and channel attention at multiple resolutions. Unlike existing hard-attention based methods, where only the most correlated features are sought via highly down-sampled reference images, the proposed architecture is more capable of capturing and fusing the shareable information between the multi-contrast images. Extensive experiments are conducted on fastMRI knee data at high field and more challenging brain data at low field, demonstrating that DCAMSR substantially outperforms state-of-the-art single-image and multi-contrast MRI super-resolution methods, and even remains robust in a self-referenced manner. The code for DCAMSR is available at https://github.com/Solor-pikachu/DCAMSR.
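To make the architecture description in the abstract concrete, below is a minimal sketch (not the authors' released code) of a dual cross-attention block in PyTorch, assuming features of the upsampled input contrast act as queries while features of the high-resolution reference contrast supply keys and values. The class name, dimensions, and the use of nn.MultiheadAttention for the spatial branch are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a dual (spatial + channel) cross-attention block.
# Assumption: both feature maps share the same shape (B, C, H, W).
import torch
import torch.nn as nn


class DualCrossAttention(nn.Module):
    """Fuse upsampled-input features (queries) with reference-image features
    (keys/values) via attention over pixels (spatial) and over feature maps
    (channel)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        # Learnable temperature scaling the channel-affinity logits.
        self.temperature = nn.Parameter(torch.ones(1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x_up: torch.Tensor, x_ref: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x_up.shape
        q = x_up.flatten(2).transpose(1, 2)    # (B, HW, C) from upsampled input
        kv = x_ref.flatten(2).transpose(1, 2)  # (B, HW, C) from reference image

        # Spatial cross-attention: each query pixel attends over reference pixels.
        spat, _ = self.spatial_attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))

        # Channel cross-attention: (C x C) affinity between query and reference channels.
        qc = nn.functional.normalize(q.transpose(1, 2), dim=-1)   # (B, C, HW)
        kc = nn.functional.normalize(kv.transpose(1, 2), dim=-1)  # (B, C, HW)
        attn_c = (qc @ kc.transpose(-2, -1)) * self.temperature   # (B, C, C)
        chan = (attn_c.softmax(dim=-1) @ kv.transpose(1, 2)).transpose(1, 2)  # (B, HW, C)

        out = q + self.proj(spat + chan)                # residual fusion
        return out.transpose(1, 2).reshape(b, c, h, w)  # back to (B, C, H, W)


if __name__ == "__main__":
    block = DualCrossAttention(dim=64)
    x_up = torch.randn(1, 64, 80, 80)   # upsampled low-resolution contrast
    x_ref = torch.randn(1, 64, 80, 80)  # high-resolution reference contrast
    print(block(x_up, x_ref).shape)     # torch.Size([1, 64, 80, 80])
```

In the full DCAMSR framework this kind of fusion is applied at multiple resolutions; the sketch only illustrates how pixel-wise and channel-wise cross-attention between the two contrasts could be combined at a single scale.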
Pages: 313-322
Number of pages: 10
Related Papers
(50 records in total)
  • [31] Feng, Chun-Mei; Yan, Yunlu; Fu, Huazhu; Chen, Li; Xu, Yong. Task Transformer Network for Joint MRI Reconstruction and Super-Resolution. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, PT VI, 2021, 12906: 307-317.
  • [32] Zhang, Chun; Wang, Jin; Shi, Yunhui; Yin, Baocai; Ling, Nam. A CNN-transformer hybrid network with selective fusion and dual attention for image super-resolution. MULTIMEDIA SYSTEMS, 2025, 31 (02).
  • [33] Lee, Ho Hin; Saunders, Adam M.; Kim, Michael E.; Remedios, Samuel W.; Remedios, Lucas W.; Tang, Yucheng; Yang, Qi; Yu, Xin; Bao, Shunxing; Cho, Chloe; Mawn, Louise A.; Rex, Tonia S.; Schey, Kevin L.; Dewey, Blake E.; Spraggins, Jeffrey M.; Prince, Jerry L.; Huo, Yuankai; Landman, Bennett A. Super-resolution multi-contrast unbiased eye atlases with deep probabilistic refinement. JOURNAL OF MEDICAL IMAGING, 2024, 11 (06).
  • [34] Ding, Yue; Zhou, Tao; Xiang, Lei; Wu, Ye. Cross-contrast mutual fusion network for joint MRI reconstruction and super-resolution. PATTERN RECOGNITION, 2024, 154.
  • [35] Liang, Shubo; Song, Kechen; Zhao, Wenli; Li, Song; Yan, Yunhui. DASR: Dual-Attention Transformer for infrared image super-resolution. INFRARED PHYSICS & TECHNOLOGY, 2023, 133.
  • [36] Wang, Jing; Yu, Long; Tian, Shengwei. Cross-attention interaction learning network for multi-model image fusion via transformer. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139.
  • [37] Wang, Wanliang; Shen, Haoxin; Chen, Jiacheng; Xing, Fangsen. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 163.
  • [38] Hu, Jianhua; Zheng, Shuzhao; Wang, Bo; Luo, Guixiang; Huang, WoQing; Zhang, Jun. Super-Resolution Swin Transformer and Attention Network for Medical CT Imaging. BIOMED RESEARCH INTERNATIONAL, 2022, 2022.
  • [39] Liu, Anqi; Li, Sumei; Chang, Yongli. Cross-resolution feature attention network for image super-resolution. VISUAL COMPUTER, 2023, 39 (09): 3837-3849.