Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net

Cited: 140
Authors
Lei, Yang [1 ,2 ]
Tian, Sibo [1 ,2 ]
He, Xiuxiu [1 ,2 ]
Wang, Tonghe [1 ,2 ]
Wang, Bo [1 ,2 ]
Patel, Pretesh [1 ,2 ]
Jani, Ashesh B. [1 ,2 ]
Mao, Hui [2 ,3 ]
Curran, Walter J. [1 ,2 ]
Liu, Tian [1 ,2 ]
Yang, Xiaofeng [1 ,2 ]
Affiliations
[1] Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA
[2] Emory Univ, Winship Canc Inst, Atlanta, GA 30322 USA
[3] Emory Univ, Dept Radiol & Imaging Sci, Atlanta, GA 30322 USA
Funding
U.S. National Institutes of Health (NIH)
Keywords
deeply supervised network; deep learning; prostate segmentation; transrectal ultrasound (TRUS); AUTOMATED SEGMENTATION; IMAGES;
DOI
10.1002/mp.13577
CLC numbers
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Purpose: Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.
Methods and materials: We developed a multidirectional deep learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to address the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches extracted from a newly acquired ultrasound image are fed into the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined by a contour refinement step.
Results: TRUS images from forty-four patients were used to test our segmentation method. The segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC) was 0.92 ± 0.03, and the mean Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 3.94 ± 1.55, 0.60 ± 0.23, and 0.90 ± 0.38 mm, respectively.
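The stage-wise hybrid loss described above (BCE plus batch-based Dice, summed over the deeply supervised stage outputs) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names and stage weights are assumptions:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy over a predicted probability patch."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 minus the Dice overlap of prediction and label."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def deeply_supervised_loss(stage_preds, target, stage_weights):
    """Stage-wise hybrid loss: a weighted sum of BCE + Dice over each
    deeply supervised stage output (assumed resampled to the label shape)."""
    return sum(w * (bce_loss(p, target) + dice_loss(p, target))
               for w, p in zip(stage_weights, stage_preds))
```

In a real training loop these would be differentiable tensor operations (e.g., in PyTorch or TensorFlow) rather than NumPy, but the structure of the loss is the same.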
Conclusion: We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate in TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
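The patch-based inference step (extracting overlapping 3D patches from a new volume, then fusing the per-patch predictions back into a full segmentation) can be sketched as below. This is a simplified NumPy illustration under assumed names (`extract_patches`, `fuse_patches`); the paper's fusion and subsequent contour refinement are not reproduced here:

```python
import numpy as np

def extract_patches(vol, size, stride):
    """Extract overlapping 3D patches and record their corner indices."""
    patches, corners = [], []
    for z in range(0, vol.shape[0] - size[0] + 1, stride[0]):
        for y in range(0, vol.shape[1] - size[1] + 1, stride[1]):
            for x in range(0, vol.shape[2] - size[2] + 1, stride[2]):
                patches.append(vol[z:z+size[0], y:y+size[1], x:x+size[2]])
                corners.append((z, y, x))
    return patches, corners

def fuse_patches(pred_patches, corners, size, vol_shape):
    """Average overlapping per-patch predictions back into one volume."""
    acc = np.zeros(vol_shape)
    cnt = np.zeros(vol_shape)
    for p, (z, y, x) in zip(pred_patches, corners):
        acc[z:z+size[0], y:y+size[1], x:x+size[2]] += p
        cnt[z:z+size[0], y:y+size[1], x:x+size[2]] += 1
    return acc / np.maximum(cnt, 1)  # uncovered voxels stay 0
```

With a stride smaller than the patch size, each voxel is predicted several times and the average smooths patch-boundary artifacts before thresholding and contour refinement.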
Pages: 3194-3206 (13 pages)
Related papers (50 total)
  • [1] 3D Ultrasound Prostate Segmentation Using 3D Deeply Supervised V-Net
    Yang, X.
    Lei, Y.
    Tian, S.
    Wang, T.
    Jani, A.
    Curran, W.
    Patel, P.
    Liu, T.
    [J]. MEDICAL PHYSICS, 2018, 45 (06) : E473 - E473
  • [2] Ultrasound Prostate Segmentation Based on 3D V-Net with Deep Supervision
    Lei, Yang
    Wang, Tonghe
    Wang, Bo
    He, Xiuxiu
    Tian, Sibo
    Jani, Ashesh B.
    Mao, Hui
    Curran, Walter J.
    Patel, Pretesh
    Liu, Tian
    Yang, Xiaofeng
    [J]. MEDICAL IMAGING 2019: ULTRASONIC IMAGING AND TOMOGRAPHY, 2019, 10955
  • [3] Fetal Ultrasound Image Segmentation for Automatic Head Circumference Biometry Using Deeply Supervised Attention-Gated V-Net
    Zeng, Yan
    Tsui, Po-Hsiang
    Wu, Weiwei
    Zhou, Zhuhuang
    Wu, Shuicai
    [J]. JOURNAL OF DIGITAL IMAGING, 2021, 34 (01) : 134 - 148
  • [5] Segmentation of prostate region in magnetic resonance images based on improved V-Net
    Gao, Mingyuan
    Yan, Shiju
    Song, Chengli
    Zhu, Zehua
    Xie, Erze
    Fang, Boya
    [J]. Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering, 2023, 40 (02): : 226 - 233
  • [6] FOE-NET: Segmentation of Fetal in Ultrasound Images using V-NET
    Pregitha, Eveline R.
    Kumar, Vinod R. S.
    Selvakumar, Ebbie C.
    [J]. INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, 2023, 14 (10) : 1141 - 1149
  • [7] Attention V-Net: A Modified V-Net Architecture for Left Atrial Segmentation
    Liu, Xiaoli
    Yin, Ruoqi
    Yin, Jianqin
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (08):
  • [8] LIGHTWEIGHT V-NET FOR LIVER SEGMENTATION
    Lei, Tao
    Zhou, Wenzheng
    Zhang, Yuxiao
    Wang, Risheng
    Meng, Hongying
    Nandi, Asoke K.
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 1379 - 1383
  • [9] V-net Performances for 2D Ultrasound Image Segmentation
    Dangoury, Soufiane
    Sadik, Mohammed
    Alali, Abdelhakim
    Fail, Abderrahim
    [J]. 2022 IEEE 18TH INTERNATIONAL COLLOQUIUM ON SIGNAL PROCESSING & APPLICATIONS (CSPA 2022), 2022, : 96 - 100
  • [10] Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net
    Hambarde, Praful
    Talbar, Sanjay
    Mahajan, Abhishek
    Chavan, Satishkumar
    Thakur, Meenakshi
    Sable, Nilesh
    [J]. BIOCYBERNETICS AND BIOMEDICAL ENGINEERING, 2020, 40 (04) : 1421 - 1435