Automatic Segmentation and Alignment of Uterine Shapes from 3D Ultrasound Data

Cited by: 0
Authors
Boneš E. [1]
Gergolet M. [2]
Bohak C. [1,3]
Lesar Ž. [1]
Marolt M. [1]
Affiliations
[1] University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana
[2] University of Ljubljana, Faculty of Medicine, Vrazov trg 2, Ljubljana
[3] King Abdullah University of Science and Technology, Visual Computing Center, Thuwal
Keywords
3D alignment; Uterus segmentation; Volumetric ultrasound
DOI
10.1016/j.compbiomed.2024.108794
Abstract
Background: The uterus is the most important organ in the female reproductive system. Its shape plays a critical role in fertility and pregnancy outcomes. Advances in medical imaging, such as 3D ultrasound, have significantly improved the exploration of the female genital tract, thereby enhancing gynecological healthcare. Despite well-documented data for organs like the liver and heart, large-scale studies on the uterus are lacking. Existing classifications, such as VCUAM and ESHRE/ESGE, provide different definitions for normal uterine shapes but are not based on real-world measurements. Moreover, the lack of comprehensive datasets significantly hinders research in this area. Our research, part of the larger NURSE study, aims to fill this gap by establishing the shape of a normal uterus using real-world 3D vaginal ultrasound scans. This will facilitate research into uterine shape abnormalities associated with infertility and recurrent miscarriages.
Methods: We developed an automated system for the segmentation and alignment of uterine shapes from 3D ultrasound data, which consists of two steps: automatic segmentation of the uteri in 3D ultrasound scans using deep learning techniques, and alignment of the resulting shapes with standard geometrical approaches, enabling the extraction of the normal shape for future analysis. The system was trained and validated on a comprehensive dataset of 3D ultrasound images from multiple medical centers. Its performance was evaluated by comparing the automated results with manual annotations provided by expert clinicians.
Results: The presented approach demonstrated high accuracy in segmenting and aligning uterine shapes from 3D ultrasound data. The segmentation achieved an average Dice similarity coefficient (DSC) of 0.90. Our method for aligning uterine shapes showed minimal translation and rotation errors compared to traditional methods, with the preliminary average shape exhibiting characteristics consistent with expert findings of a normal uterus.
Conclusion: We have presented an approach to automatically segment and align uterine shapes from 3D ultrasound data. We trained a deep learning nnU-Net model that achieved high accuracy and proposed an alignment method using a combination of standard geometrical techniques. Additionally, we have created a publicly available dataset of 3D transvaginal ultrasound volumes with manual annotations of uterine cavities to support further research and development in this field. The dataset and the trained models are available at https://github.com/UL-FRI-LGM/UterUS.
© 2024 The Author(s)
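For context, the Dice similarity coefficient (DSC) quoted in the Results is defined as DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the foreground voxel sets of the automatic and manual segmentations. The minimal Python sketch below shows how such a score can be computed from two binary masks; the function and variable names are illustrative and are not taken from the authors' code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A ∩ B| / (|A| + |B|), where A and B are the foreground
    voxel sets of the predicted and manually annotated segmentations.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total
```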
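The abstract describes the alignment step only as "a combination of standard geometrical techniques", so the sketch below is an assumption rather than the authors' actual pipeline: a common baseline of that kind removes translation by centering each segmented shape at its centroid and removes rotation by aligning the shape's principal axes (PCA) to a common frame, after which translation and rotation differences between shapes can be measured.

```python
import numpy as np

def principal_axes_alignment(points: np.ndarray):
    """Illustrative rigid alignment of a 3D point cloud (N x 3).

    Centers the points at their centroid and rotates them so that the
    principal axes of the point distribution coincide with the
    coordinate axes. This is a generic baseline, not the specific
    method used in the paper.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid                 # remove translation
    cov = np.cov(centered, rowvar=False)         # 3 x 3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]            # major axis first
    rotation = eigvecs[:, order]
    if np.linalg.det(rotation) < 0:              # keep a right-handed frame
        rotation[:, -1] *= -1
    aligned = centered @ rotation                # remove rotation
    return aligned, centroid, rotation
```

Shapes aligned in such a common frame can then be resampled and averaged to obtain a preliminary mean shape, analogous to the "extraction of the normal shape" described in the Methods; the exact averaging procedure is not given in the abstract.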