Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net

Cited by: 140
Authors
Lei, Yang [1 ,2 ]
Tian, Sibo [1 ,2 ]
He, Xiuxiu [1 ,2 ]
Wang, Tonghe [1 ,2 ]
Wang, Bo [1 ,2 ]
Patel, Pretesh [1 ,2 ]
Jani, Ashesh B. [1 ,2 ]
Mao, Hui [2 ,3 ]
Curran, Walter J. [1 ,2 ]
Liu, Tian [1 ,2 ]
Yang, Xiaofeng [1 ,2 ]
Affiliations
[1] Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA
[2] Emory Univ, Winship Canc Inst, Atlanta, GA 30322 USA
[3] Emory Univ, Dept Radiol & Imaging Sci, Atlanta, GA 30322 USA
Funding
U.S. National Institutes of Health (NIH)
Keywords
deeply supervised network; deep learning; prostate segmentation; transrectal ultrasound (TRUS); automated segmentation; images
DOI
10.1002/mp.13577
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject classification codes
1002; 100207; 1009
Abstract
Purpose: Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.

Methods and materials: We developed a multidirectional deep learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to address the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep supervision training. During the segmentation stage, patches are extracted from the newly acquired ultrasound image as input to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through a contour refinement process.

Results: TRUS images from 44 patients were used to test our segmentation method. The segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.

Conclusion: We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
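To make the stage-wise hybrid loss described in the abstract concrete, a minimal PyTorch sketch is given below. It combines a BCE term with a batch-based Dice term and adds weighted copies of that hybrid loss on upsampled auxiliary stage outputs, which is the deep-supervision idea; the function names, stage weights, and tensor layout are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch (not the paper's code): stage-wise hybrid BCE + batch-based
# Dice loss with deep supervision on auxiliary V-Net stage outputs.
# Assumed tensor layout: logits and target are float tensors of shape [B, 1, D, H, W].
import torch
import torch.nn.functional as F

def batch_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss accumulated over the whole batch (batch-based Dice)."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum()
    union = probs.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (union + eps)

def hybrid_loss(logits, target):
    """Hybrid loss for one prediction: BCE plus batch-based Dice."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce + batch_dice_loss(logits, target)

def deeply_supervised_loss(main_logits, stage_logits, target,
                           stage_weights=(0.5, 0.25, 0.125)):
    """Hybrid loss on the final output plus weighted hybrid losses on the
    auxiliary stage outputs (deep supervision). The weights here are
    illustrative defaults, not values reported in the paper."""
    loss = hybrid_loss(main_logits, target)
    for w, aux in zip(stage_weights, stage_logits):
        # Upsample each auxiliary 3D stage output to the label resolution.
        aux_up = F.interpolate(aux, size=target.shape[2:],
                               mode="trilinear", align_corners=False)
        loss = loss + w * hybrid_loss(aux_up, target)
    return loss

# Usage with random tensors standing in for one patch batch:
# main = torch.randn(2, 1, 64, 64, 64)
# aux = [torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 16, 16, 16)]
# target = (torch.rand(2, 1, 64, 64, 64) > 0.5).float()
# loss = deeply_supervised_loss(main, aux, target)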
Pages: 3194-3206
Number of pages: 13