Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation

Cited: 3
Authors
Zhong, Qihuang [1 ,2 ,3 ]
Zeng, Fanzhou [1 ]
Liao, Fei [1 ]
Liu, Juhua [2 ,3 ]
Du, Bo [3 ,4 ,5 ]
Shang, Jedi S. [6 ]
Affiliations
[1] Wuhan Univ, Renmin Hosp, Dept Gastroenterol, Wuhan, Peoples R China
[2] Wuhan Univ, Sch Printing & Packaging, Wuhan, Peoples R China
[3] Wuhan Univ, Inst Artificial Intelligence, Natl Engn Res Ctr Multimedia Software, Wuhan, Peoples R China
[4] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[5] Wuhan Univ, Hubei Key Lab Multimedia & Network Commun Engn, Wuhan, Peoples R China
[6] Thinvent Technol Co LTD, Nanchang, Jiangxi, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2023, Vol. 35, Issue 05
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation; Attention; Cross-modality; Semantic segmentation; AUTOMATED SEGMENTATION; PATCH;
D O I
10.1007/s00521-021-06064-w
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep learning-based methods have been widely used for semantic segmentation in recent years. However, because collecting pixel-level annotations is difficult and labor-intensive, it is hard to acquire sufficient training images for a given imaging modality, which greatly limits the performance of these methods. An intuitive solution is to train a model on a label-rich imaging modality (the source domain) and then apply it to the label-poor imaging modality (the target domain). Unsurprisingly, because of the severe domain shift between modalities, such a pre-trained model performs poorly on the target imaging modality. To this end, we propose a novel unsupervised domain adaptation framework, called Joint Image and Feature Adaptive Attention-aware Networks (JIFAAN), to alleviate the domain shift for cross-modality semantic segmentation. The proposed framework consists of two procedures. The first, image adaptation, transforms source-domain images into target-like images using adversarial learning with a cycle-consistency constraint. To further bridge the gap between the transformed images and target-domain images, the second procedure, feature adaptation, extracts domain-invariant features and thus aligns the distributions in feature space. In particular, we introduce an attention module in the feature adaptation to focus on noteworthy regions and generate attention-aware results. Lastly, we combine the two procedures in an end-to-end manner. Experiments on two cross-modality semantic segmentation datasets demonstrate the effectiveness of the proposed framework. Specifically, JIFAAN surpasses cutting-edge domain adaptation methods and achieves state-of-the-art performance.
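The image-adaptation step described above relies on a cycle-consistency constraint: a source image mapped to the target style and then mapped back should reconstruct the original image. A minimal sketch of that loss term, with toy stand-in generators `g_st` and `g_ts` (hypothetical names; the paper's actual networks are learned CNNs), could look like:

```python
import numpy as np

def cycle_consistency_loss(x, g_st, g_ts):
    """L1 cycle loss: source -> target-like -> reconstructed source."""
    x_target_like = g_st(x)          # translate source image to target style
    x_reconstructed = g_ts(x_target_like)  # translate back to source style
    return float(np.mean(np.abs(x_reconstructed - x)))

# Toy, exactly-inverse "generators" standing in for the learned mappings.
g_st = lambda x: 2.0 * x + 1.0       # source -> target style
g_ts = lambda y: (y - 1.0) / 2.0     # target -> source style

x = np.random.rand(4, 1, 8, 8)       # a mini-batch of source "images"
loss = cycle_consistency_loss(x, g_st, g_ts)
```

In an actual CycleGAN-style setup this term is minimized jointly with adversarial losses on both mapping directions; here the inverse pair is exact, so the loss is near zero by construction.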
Pages: 3665-3676
Page count: 12