Multi-label semantic segmentation of magnetic resonance images of the prostate gland

Cited by: 0
Authors
Locherer, Mark [1 ]
Bonenberger, Christopher [1 ]
Ertel, Wolfgang [1 ]
Hadaschik, Boris [2 ]
Stumm, Kristina [2 ]
Schneider, Markus [1 ]
Radtke, Jan Philipp [2 ,3 ,4 ]
Affiliations
[1] Institute for Artificial Intelligence, University of Applied Sciences Ravensburg-Weingarten (RWU), P.O. Box 30 22, 88216 Weingarten, Germany
[2] Department of Urology, University Hospital Essen, Hufelandstraße 55, 45147 Essen, Germany
[3] Department of Urology, Medical Faculty, Heinrich Heine-University Düsseldorf, Moorenstr. 5, 40225 Düsseldorf, Germany
[4] Department of Radiology, German Cancer Research Center, Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
Source
Discover Artificial Intelligence | 2024, Vol. 4, Issue 1
Keywords
Benchmarking; Chemical shift; Deep learning; Image enhancement; Labeled data; Large datasets; Oncology; Pipeline processing systems; Urology
DOI
10.1007/s44163-024-00162-z
Abstract
Prostate segmentation is a substantial factor in the diagnostic pathway for suspicious prostate lesions. Medical doctors are assisted by computer-aided detection and diagnosis systems and by methods derived from artificial intelligence; deep learning-based systems, however, have to be trained on existing data, and freely available labeled prostate magnetic resonance imaging data in particular is rare. To address this problem, we show a method to combine two existing small datasets into a larger one. We present a data processing pipeline built around a cascaded network architecture that performs multi-label semantic segmentation, that is, it assigns each pixel of a T2-weighted image not to a single class but to a subset of the prostate zones and classes of interest. This delivers richer information, such as overlapping zones, that is key to radiological examination. Additionally, we describe how to integrate expert knowledge into our deep learning system. To increase data variety for training and evaluation, we apply image augmentation to our two datasets: a freely available dataset and our new open-source dataset. To combine the datasets, we denoise the contourings in our dataset with an effective yet simple algorithm based on standard computer vision methods only. The performance of the presented methodology is evaluated using the Dice score metric and 5-fold cross-validation on all datasets. Although we trained on tiny datasets, our method achieves excellent segmentation quality and is even able to detect prostate cancer. Our method for combining the two datasets reduces segmentation errors and increases data variety. The proposed architecture significantly improves performance by including expert knowledge via feature-map concatenation. On the Initiative for Collaborative Computer Vision Benchmarking dataset we achieve average Dice scores of approximately 91% for the whole prostate gland, 67% for the peripheral zone and 75% for the prostate central gland. We find that image augmentation, except for contrast limited adaptive histogram equalisation, did not have much influence on segmentation quality. Derived and enhanced from existing methods, our approach delivers multi-label semantic segmentation results for prostate magnetic resonance imaging. The approach is simple, could be applied to other deep learning applications as well, and improves the segmentation results by a large margin. Once tweaked to the data, our denoising and combination algorithm delivers robust and accurate results even on data with segmentation errors. © The Author(s) 2024.
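As a rough illustration of what multi-label per-pixel classification and Dice-based evaluation mean in the abstract, the sketch below (plain NumPy, with hypothetical array shapes and class ordering; the paper's cascaded network and exact evaluation protocol are not reproduced) thresholds each class channel independently after a sigmoid, so a pixel may belong to overlapping prostate zones, and scores each label with its own Dice coefficient.

```python
import numpy as np

def multilabel_masks(logits, threshold=0.5):
    """Turn per-class activations of shape (C, H, W) into binary masks.

    In multi-label semantic segmentation each class channel is passed
    through a sigmoid and thresholded independently, so a single pixel
    may belong to several classes at once (e.g. both the whole gland
    and the peripheral zone).
    """
    probs = 1.0 / (1.0 + np.exp(-logits))   # element-wise sigmoid
    return probs > threshold                 # boolean masks, shape (C, H, W)


def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of identical shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)


# Example: evaluate each label separately, as in the per-zone Dice scores
# reported in the abstract (whole gland, peripheral zone, central gland).
logits = np.random.randn(3, 256, 256)        # hypothetical network output
targets = np.random.rand(3, 256, 256) > 0.5  # hypothetical ground truth
preds = multilabel_masks(logits)
per_label_dice = [dice_score(preds[c], targets[c]) for c in range(preds.shape[0])]
```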
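The abstract also singles out CLAHE as the only augmentation with a notable effect and mentions a simple, purely computer-vision-based denoising of the contourings. Since the exact procedure is not given here, the OpenCV sketch below is only a plausible illustration of those two steps; the function names, kernel sizes, and the largest-connected-component heuristic are assumptions, not the authors' method.

```python
import cv2
import numpy as np

def clahe_t2(slice_uint8, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast limited adaptive histogram equalisation (CLAHE) on an
    8-bit, single-channel T2-weighted slice."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(slice_uint8)


def clean_mask(mask_uint8, kernel_size=3):
    """Illustrative contour denoising with standard OpenCV operations:
    morphological opening/closing to remove speckle, then keeping only
    the largest connected component. The paper's actual algorithm may differ."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    mask = cv2.morphologyEx(mask_uint8, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n_labels <= 1:              # background only, nothing to keep
        return mask
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```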
Related papers
50 records
  • [1] Residual Semantic Segmentation of the Prostate from Magnetic Resonance Images
    Hossain, Md Sazzad
    Paplinski, Andrew P.
    Betts, John M.
    NEURAL INFORMATION PROCESSING (ICONIP 2018), PT VII, 2018, 11307 : 510 - 521
  • [2] 2D Semantic Segmentation of the Prostate Gland in Magnetic Resonance Images using Convolutional Neural Networks
    Vacacela, Silvia P.
    Benalcazar, Marco E.
    IFAC PAPERSONLINE, 2021, 54 (15): : 394 - 399
  • [3] Learning-Based Multi-Label Segmentation of Transrectal Ultrasound Images for Prostate Brachytherapy
    Nouranian, Saman
    Ramezani, Mahdi
    Spadinger, Ingrid
    Morris, William J.
    Salcudean, Septimiu E.
    Abolmaesumi, Purang
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2016, 35 (03) : 921 - 932
  • [4] JMLNet: Joint Multi-Label Learning Network for Weakly Supervised Semantic Segmentation in Aerial Images
    Guo, Rongxin
    Sun, Xian
    Chen, Kaiqiang
    Zhou, Xiao
    Yan, Zhiyuan
    Diao, Wenhui
    Yan, Menglong
    REMOTE SENSING, 2020, 12 (19) : 1 - 18
  • [5] Automatic Design of Window Operators for the Segmentation of the Prostate Gland in Magnetic Resonance Images
    Benalcazar, Marco E.
    Brun, Marcel
    Ballarin, Virginia
    VI LATIN AMERICAN CONGRESS ON BIOMEDICAL ENGINEERING (CLAIB 2014), 2014, 49 : 417 - 420
  • [6] Matwo-CapsNet: A Multi-label Semantic Segmentation Capsules Network
    Bonheur, Savinien
    Stern, Darko
    Payer, Christian
    Pienn, Michael
    Olschewski, Horst
    Urschler, Martin
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2019, PT V, 2019, 11768 : 664 - 672
  • [7] Interactive Multi-label Segmentation
    Santner, Jakob
    Pock, Thomas
    Bischof, Horst
    COMPUTER VISION-ACCV 2010, PT I, 2011, 6492 : 397 - 410
  • [8] Midrange Geometric Interactions for Semantic Segmentation Constraints for Continuous Multi-label Optimization
    Diebold, Julia
    Nieuwenhuis, Claudia
    Cremers, Daniel
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2016, 117 (03) : 199 - 225
  • [9] Multi-modal Multi-label Semantic Indexing of Images using Unlabeled Data
    Li, Wei
    Sun, Maosong
    ALPIT 2008: SEVENTH INTERNATIONAL CONFERENCE ON ADVANCED LANGUAGE PROCESSING AND WEB INFORMATION TECHNOLOGY, PROCEEDINGS, 2008, : 204 - 209
  • [10] Learning Multi-level Region Consistency with Dense Multi-label Networks for Semantic Segmentation
    Shen, Tong
    Lin, Guosheng
    Shen, Chunhua
    Reid, Ian
    PROCEEDINGS OF THE TWENTY-SIXTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 2708 - 2714