Exploring Separable Attention for Multi-Contrast MR Image Super-Resolution

Cited by: 12
|
Authors
Feng, Chun-Mei [1 ,2 ]
Yan, Yunlu [3 ]
Yu, Kai [1 ]
Xu, Yong [2 ]
Fu, Huazhu [1 ]
Yang, Jian [4 ,5 ]
Shao, Ling [6 ]
Affiliations
[1] ASTAR, Inst High Performance Comp IHPC, Singapore 138632, Singapore
[2] Harbin Inst Technol Shenzhen, Shenzhen Key Lab Visual Object Detect & Recognit, Shenzhen 518055, Peoples R China
[3] Hong Kong Univ Sci & Technol Guangzhou, Guangzhou 511458, Peoples R China
[4] Nanjing Univ Sci & Technol, PCA Lab, Key Lab Intelligent Percept & Syst High Dimens Inf, Minist Educ, Nanjing 210094, Peoples R China
[5] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding for Soc, Nanjing 210094, Peoples R China
[6] Univ Chinese Acad Sci, UCAS Terminus AI Lab, Beijing 065001, Peoples R China
Keywords
High- and low-intensity regions; magnetic resonance (MR) imaging; multi-contrast; super-resolution (SR); SPARSE REPRESENTATION; BRAIN MRI; NETWORK; SINGLE; RECONSTRUCTION; ALGORITHM;
DOI
10.1109/TNNLS.2023.3253557
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Super-resolving the magnetic resonance (MR) image of a target contrast under the guidance of a corresponding auxiliary contrast, which provides additional anatomical information, is a new and effective solution for fast MR imaging. However, current multi-contrast super-resolution (SR) methods tend to concatenate the different contrasts directly, ignoring the relationships between them in different cues, e.g., in the high- and low-intensity regions. In this study, we propose a separable attention network, named SANet, comprising high-intensity priority (HP) attention and low-intensity separation (LS) attention. SANet explores the high- and low-intensity regions in the "forward" and "reverse" directions with the help of the auxiliary contrast, while learning clearer anatomical structure and edge information for the SR of the target-contrast MR image. SANet provides three appealing benefits. First, it is the first model to explore a separable attention mechanism that uses the auxiliary contrast to predict the high- and low-intensity regions, diverting more attention to refining uncertain details between these regions and correcting fine areas in the reconstructed results. Second, a multistage integration module is proposed to learn the response of multi-contrast fusion at multiple stages, capture the dependencies between the fused representations, and boost their representation ability. Third, extensive experiments against various state-of-the-art multi-contrast SR methods on fastMRI and clinical in vivo datasets demonstrate the superiority of our model. The code is released at https://github.com/chunmeifeng/SANet.
Pages: 12251-12262
Number of pages: 12
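
As a rough, illustrative sketch of the separable attention idea described in the abstract (not the authors' released implementation, which is available at the GitHub link above), the following PyTorch snippet shows one way an auxiliary-contrast feature map could predict an attention map that splits the target-contrast features into a high-intensity priority ("forward") branch and a low-intensity separation ("reverse") branch before fusing them. The module and variable names (SeparableAttention, to_map, fuse) are assumptions for illustration only.

import torch
import torch.nn as nn

class SeparableAttention(nn.Module):
    # Hypothetical sketch of separable attention; not the official SANet module.
    def __init__(self, channels: int):
        super().__init__()
        # Predict a [0, 1] attention map from the auxiliary-contrast features.
        self.to_map = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse the two branches back to the original channel width.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, target_feat: torch.Tensor, aux_feat: torch.Tensor) -> torch.Tensor:
        a = self.to_map(aux_feat)                 # (B, 1, H, W) soft intensity mask
        hp = target_feat * a                      # "forward": high-intensity priority
        ls = target_feat * (1.0 - a)              # "reverse": low-intensity separation
        return target_feat + self.fuse(torch.cat([hp, ls], dim=1))  # residual fusion

if __name__ == "__main__":
    # Toy usage: 64-channel features from the target and auxiliary contrasts.
    sa = SeparableAttention(channels=64)
    tgt = torch.randn(1, 64, 40, 40)
    aux = torch.randn(1, 64, 40, 40)
    print(sa(tgt, aux).shape)  # torch.Size([1, 64, 40, 40])

In this toy version the Sigmoid output acts as a soft high-/low-intensity mask derived from the auxiliary contrast; the published SANet architecture and its multistage integration module differ in structure and training details.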
Related Papers
50 records in total
  • [31] Multi-attention augmented network for single image super-resolution
    Chen, Rui
    Zhang, Heng
    Liu, Jixin
    PATTERN RECOGNITION, 2022, 122
  • [32] Multi-Grained Attention Networks for Single Image Super-Resolution
    Wu, Huapeng
    Zou, Zhengxia
    Gui, Jie
    Zeng, Wen-Jun
    Ye, Jieping
    Zhang, Jun
    Liu, Hongyi
    Wei, Zhihui
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (02) : 512 - 522
  • [33] Image super-resolution reconstruction with multi-scale attention fusion
    Chen, Chun-yi
    Wu, Xin-yi
    Hu, Xiao-juan
    Yu, Hai-yang
    CHINESE OPTICS, 2023, 16 (05) : 1034 - 1044
  • [34] TBNet: Stereo Image Super-Resolution with Multi-Scale Attention
    Zhu, Jiyang
    Han, Xue
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2023, 32 (18)
  • [35] DROPOUT MULTI-HEAD ATTENTION FOR SINGLE IMAGE SUPER-RESOLUTION
    Yang, Chao
    Fan, Yong
    Lu, Cheng
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 2655 - 2659
  • [36] Exploring the impact of super-resolution deep learning on MR angiography image quality
    Hokamura, Masamichi
    Uetani, Hiroyuki
    Nakaura, Takeshi
    Matsuo, Kensei
    Morita, Kosuke
    Nagayama, Yasunori
    Kidoh, Masafumi
    Yamashita, Yuichi
    Ueda, Mitsuharu
    Mukasa, Akitake
    Hirai, Toshinori
    NEURORADIOLOGY, 2024, 66 (02) : 217 - 226
  • [38] Transformer-empowered Multi-scale Contextual Matching and Aggregation for Multi-contrast MRI Super-resolution
    Li, Guangyuan
    Lv, Jun
    Tian, Yapeng
    Dou, Qi
    Wang, Chengyan
    Xu, Chenliang
    Qin, Jing
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 20604 - 20613
  • [39] SGSR: Structure-Guided Multi-contrast MRI Super-Resolution via Spatio-Frequency Co-Query Attention
    Zheng, Shaoming
    Wang, Yinsong
    Du, Siyi
    Qin, Chen
    MACHINE LEARNING IN MEDICAL IMAGING, PT I, MLMI 2024, 2025, 15241 : 382 - 391
  • [40] Rethinking Multi-Contrast MRI Super-Resolution: Rectangle-Window Cross-Attention Transformer and Arbitrary-Scale Upsampling
    Li, Guangyuan
    Zhao, Lei
    Sun, Jiakai
    Lan, Zehua
    Zhang, Zhanjie
    Chen, Jiafu
    Lin, Zhijie
    Lin, Huaizhong
    Xing, Wei
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 21173 - 21183