Dual Encoder Attention U-net for nuclei segmentation

Cited by: 16
Authors
Vahadane, Abhishek [1 ]
Atheeth, B. [1 ]
Majumdar, Shantanu [1 ]
Affiliation
[1] Rakuten Inc, Rakuten Inst Technol India, Tokyo, Japan
Source
2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC) | 2021
Keywords
Dual Encoder; Nuclei segmentation; Attention
DOI
10.1109/EMBC46164.2021.9630037
CLC number
R318 [Biomedical Engineering]
Subject classification code
0831
Abstract
Nuclei segmentation in whole slide images (WSIs) stained with Hematoxylin and Eosin (H&E) dye is a key step in computational pathology, which aims to automate the laborious process of manual counting and segmentation. It is a challenging problem involving touching nuclei, small nuclei, and large variations in size and shape. With the advent of deep learning, convolutional neural networks (CNNs) have shown a powerful ability to extract effective representations from microscopic H&E images. We propose a novel dual encoder Attention U-net (DEAU) deep learning architecture with a pseudo hard attention gating mechanism to enhance attention to target instances. We add a new secondary encoder to the attention U-net to capture the best attention for a given input. Since the Hematoxylin (H) stain captures nuclei information, we propose the stain-separated H channel as input to the secondary encoder. The role of the secondary encoder is to transform the attention prior to different spatial resolutions while learning significant attention information. The performance of the proposed DEAU was evaluated on three publicly available H&E nuclei segmentation data sets from different research groups. Experimental results show that our approach outperforms other attention-based approaches for nuclei segmentation.
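
To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' implementation): a primary encoder processes the RGB H&E patch, a secondary encoder processes the stain-separated H channel, and the secondary-encoder features gate the primary skip connections at each resolution. A standard additive (soft) attention gate is used here as a stand-in for the paper's pseudo hard attention gating; all layer sizes and module names are illustrative assumptions.

# Illustrative sketch only (assumed layers and sizes; NOT the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    # Additive (soft) attention gate: `x` is a skip feature from the primary
    # encoder, `g` is the gating feature from the secondary (H-channel) encoder.
    def __init__(self, ch_x, ch_g, ch_inter):
        super().__init__()
        self.theta_x = nn.Conv2d(ch_x, ch_inter, kernel_size=1)
        self.phi_g = nn.Conv2d(ch_g, ch_inter, kernel_size=1)
        self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)

    def forward(self, x, g):
        g = F.interpolate(g, size=x.shape[2:], mode="bilinear", align_corners=False)
        att = torch.sigmoid(self.psi(F.relu(self.theta_x(x) + self.phi_g(g))))
        return x * att  # per-pixel re-weighting of the skip connection


def conv_block(ch_in, ch_out):
    return nn.Sequential(
        nn.Conv2d(ch_in, ch_out, 3, padding=1), nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
        nn.Conv2d(ch_out, ch_out, 3, padding=1), nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
    )


class DualEncoderAttentionUNetSketch(nn.Module):
    # Two-level toy version: the primary encoder sees the RGB H&E patch,
    # the secondary encoder sees the 1-channel stain-separated H image.
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)    # primary encoder
        self.henc1, self.henc2 = conv_block(1, 32), conv_block(32, 64)  # secondary (H) encoder
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.gate2 = AttentionGate(ch_x=64, ch_g=64, ch_inter=32)
        self.gate1 = AttentionGate(ch_x=32, ch_g=32, ch_inter=16)
        self.dec2 = conv_block(128, 64)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # binary nuclei-mask logits

    def forward(self, rgb, h_channel):
        s1 = self.enc1(rgb)                 # full-resolution primary features
        s2 = self.enc2(self.pool(s1))       # half-resolution primary features
        h1 = self.henc1(h_channel)          # H-channel attention priors
        h2 = self.henc2(self.pool(h1))
        b = self.bottleneck(self.pool(s2))
        x = self.dec2(torch.cat([self.up2(b), self.gate2(s2, h2)], dim=1))
        x = self.dec1(torch.cat([self.up1(x), self.gate1(s1, h1)], dim=1))
        return self.head(x)


if __name__ == "__main__":
    net = DualEncoderAttentionUNetSketch()
    rgb = torch.randn(1, 3, 64, 64)         # H&E patch
    h = torch.randn(1, 1, 64, 64)           # stain-separated H channel
    print(net(rgb, h).shape)                # torch.Size([1, 1, 64, 64])

In this sketch the H-channel encoder supplies the gating signal at every decoder level, which mirrors the abstract's description of the secondary encoder transforming the attention prior to different spatial resolutions.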
Pages: 3205 - 3208
Number of pages: 4
Related papers
50 records in total
  • [21] Synergistic attention U-Net for sublingual vein segmentation
    Yang, Tingxiao
    Yoshimura, Yuichiro
    Morita, Akira
    Namiki, Takao
    Nakaguchi, Toshiya
    ARTIFICIAL LIFE AND ROBOTICS, 2019, 24 (04) : 550 - 559
  • [22] AttU-NET: Attention U-Net for Brain Tumor Segmentation
    Wang, Sihan
    Li, Lei
    Zhuang, Xiahai
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES, BRAINLES 2021, PT II, 2022, 12963 : 302 - 311
  • [23] Recurrent Residual U-Net with EfficientNet Encoder for Medical Image Segmentation
    Siddique, Nahian
    Paheding, Sidike
    Alom, Md Zahangir
    Devabhaktuni, Vijaya
    PATTERN RECOGNITION AND TRACKING XXXII, 2021, 11735
  • [25] Attention-augmented U-Net (AA-U-Net) for semantic segmentation
    Rajamani, Kumar T.
    Rani, Priya
    Siebert, Hanna
    ElagiriRamalingam, Rajkumar
    Heinrich, Mattias P.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 981 - 989
  • [26] Dual Encoding U-Net for Retinal Vessel Segmentation
    Wang, Bo
    Qiu, Shuang
    He, Huiguang
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT I, 2019, 11764 : 84 - 92
  • [27] An Improved U-Net Model for Simultaneous Nuclei Segmentation and Classification
    Liu, Taotao
    Zhang, Dongdong
    Wang, Hongcheng
    Qi, Xumai
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024, 2024, 14867 : 314 - 325
  • [28] MASK-RCNN AND U-NET ENSEMBLED FOR NUCLEI SEGMENTATION
    Vuola, Aarno Oskar
    Akram, Saad Ullah
    Kannala, Juho
    2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019), 2019, : 208 - 212
  • [29] Automatic skin lesion segmentation using attention residual U-Net with improved encoder-decoder architecture
    Kaur R.
    Kaur S.
    Multimedia Tools and Applications, 2025, 84 (8) : 4315 - 4341
  • [30] Cross-modality Multi-encoder Hybrid Attention U-Net for Lung Tumors Images Segmentation
    Zhou Tao
    Dong Yali
    Liu Shan
Lu Huiling
    Ma Zongjun
    Hou Senbao
    Qiu Shi
    ACTA PHOTONICA SINICA, 2022, 51 (04) : 368 - 384