Uncertainty-Guided Lung Nodule Segmentation with Feature-Aware Attention

Cited by: 9
|
Authors
Yang, Han [3 ]
Shen, Lu [3 ]
Zhang, Mengke [3 ]
Wang, Qiuli [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Biomed Engn, Ctr Med Imaging Robot Analyt Comp & Learning MIRA, Suzhou, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou, Peoples R China
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
Keywords
Lung nodule; Segmentation; Uncertainty; Attention mechanism; Computed tomography;
DOI
10.1007/978-3-031-16443-9_5
Chinese Library Classification (CLC)
TP39 [Computer Applications];
Discipline classification codes
081203 ; 0835 ;
Abstract
Since radiologists differ in training and clinical experience, they may provide different segmentation annotations for the same lung nodule. Conventional studies choose a single annotation as the learning target by default, discarding the valuable information about consensus and disagreement embedded in the multiple annotations. This paper proposes an Uncertainty-Guided Segmentation Network (UGS-Net), which learns rich visual features from the regions likely to cause segmentation uncertainty, contributing to a better segmentation result. With an Uncertainty-Aware Module, the network provides a Multi-Confidence Mask (MCM) that points out regions at different segmentation uncertainty levels. Moreover, this paper introduces a Feature-Aware Attention Module to enhance the learning of nodule boundaries and density differences. Experimental results show that our method can predict nodule regions at different uncertainty levels and achieves superior performance on the LIDC-IDRI dataset.
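The core idea behind a Multi-Confidence Mask — grading pixels by how strongly the annotators agree — can be illustrated independently of the paper's network. The sketch below is only a rough illustration of annotation fusion, not the Uncertainty-Aware Module itself; the function name and the consensus thresholds are assumptions for the example.

```python
import numpy as np

def multi_confidence_mask(annotations, levels=(0.25, 0.5, 0.75)):
    """Fuse several binary annotations of the same nodule into an
    integer confidence map.

    annotations: list of H x W binary arrays, one per radiologist.
    levels: agreement fractions separating the confidence bands.
    Returns an H x W integer array: 0 = no annotator marked the pixel,
    higher values = stronger foreground consensus.
    """
    stack = np.stack([np.asarray(a, dtype=float) for a in annotations])
    consensus = stack.mean(axis=0)      # fraction of annotators agreeing
    mcm = np.zeros(consensus.shape, dtype=int)
    for level, threshold in enumerate(levels, start=1):
        mcm[consensus >= threshold] = level   # band by agreement fraction
    return mcm
```

With the four annotations per nodule available in LIDC-IDRI, a pixel marked by all four radiologists would land in the highest band, while a pixel marked by only one would land in the lowest — the regions the abstract describes as carrying different uncertainty levels.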
Pages: 44-54
Page count: 11
Related papers
50 records
  • [31] Uncertainty-guided dual-views for semi-supervised volumetric medical image segmentation
    Peiris, Himashi
    Hayat, Munawar
    Chen, Zhaolin
    Egan, Gary
    Harandi, Mehrtash
    [J]. NATURE MACHINE INTELLIGENCE, 2023, 5 (07) : 724 - 738
  • [32] DFDM: A DEEP FEATURE DECOUPLING MODULE FOR LUNG NODULE SEGMENTATION
    Chen, Wei
    Wang, Qiuli
    Huang, Sheng
    Zhang, Xiaohong
    Li, Yucong
    Liu, Chen
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 1120 - 1124
  • [33] Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation
    Zhou, Tongxue
    Zhu, Shan
    [J]. COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 163
  • [35] Residual attention based uncertainty-guided mean teacher model for semi-supervised breast masses segmentation in 2D ultrasonography
    Farooq, Muhammad Umar
    Ullah, Zahid
    Gwak, Jeonghwan
    [J]. COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2023, 104
  • [36] Semi-supervised image semantic segmentation method with semantic regions patching and uncertainty-guided loss
    Guo, Dinghao
    Chen, Dali
    Lin, Xin
    Xue, Zheng
    Zheng, Wei
    Li, Xianling
    [J]. VISUAL COMPUTER, 2025, 41 (05) : 3611 - 3626
  • [37] Feature-guided attention network for medical image segmentation
    Zhou, Hao
    Sun, Chaoyu
    Huang, Hai
    Fan, Mingyu
    Yang, Xu
    Zhou, Linxiao
    [J]. MEDICAL PHYSICS, 2023, 50 (08) : 4871 - 4886
  • [38] KNOWLEDGE-GUIDED AND HYPER-ATTENTION AWARE JOINT NETWORK FOR BENIGN-MALIGNANT LUNG NODULE CLASSIFICATION
    Xu, Weixin
    Wang, Kun
    Lin, Jingkai
    Lu, Yuting
    Huang, Sheng
    Zhang, Xiaohong
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 310 - 314
  • [39] Uncertainty-Guided Self-learning Framework for Semi-supervised Multi-organ Segmentation
    Alves, Natalia
    de Wilde, Bram
    [J]. FAST AND LOW-RESOURCE SEMI-SUPERVISED ABDOMINAL ORGAN SEGMENTATION, FLARE 2022, 2022, 13816 : 116 - 127
  • [40] CA-UNet: Convolution and attention fusion for lung nodule segmentation
    Wang, Tong
    Wu, Fubin
    Lu, Haoran
    Xu, Shengzhou
    [J]. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2023, 33 (05) : 1469 - 1479