Clinical workflow of sonographers performing fetal anomaly ultrasound scans: deep-learning-based analysis

Cited by: 18
Authors
Drukker, L. [1 ,2 ]
Sharma, H. [3 ]
Karim, J. N. [1 ]
Droste, R. [3 ]
Noble, J. A. [3 ]
Papageorghiou, A. T. [1 ,4 ]
Affiliations
[1] Univ Oxford, John Radcliffe Hosp, Nuffield Dept Womens & Reprod Hlth, Oxford, Oxfordshire, England
[2] Tel Aviv Univ, Sackler Fac Med, Beilinson Med Ctr, Womens Ultrasound,Dept Obstet & Gynecol, Tel Aviv, Israel
[3] Univ Oxford, Inst Biomed Engn, Oxford, Oxfordshire, England
[4] Univ Oxford, John Radcliffe Hosp, Nuffield Dept Womens & Reprod Hlth, Oxford OX3 9DU, Oxfordshire, England
Funding
European Research Council
Keywords
anatomy; artificial intelligence; automation; big data; clinical workflow; computer vision; data science; deep learning; image analysis; machine learning; neural network; obstetrics; pregnancy; screening; sonography; ultrasound; QUALITY IMPROVEMENT;
DOI
10.1002/uog.24975
CLC classification
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
Objective: Despite decades of obstetric scanning, the field of sonographer workflow remains largely unexplored. In the second trimester, sonographers use scan guidelines to guide their acquisition of standard planes and structures; however, the scan-acquisition order is not prescribed. Using deep-learning-based video analysis, the aim of this study was to develop a deeper understanding of the clinical workflow undertaken by sonographers during second-trimester anomaly scans.

Methods: We prospectively collected full-length video recordings of routine second-trimester anomaly scans. Important scan events in the videos were identified by automatically detecting image freeze and image/clip save. The video immediately preceding and following each important event was extracted and labeled as one of 11 commonly acquired anatomical structures. We developed a purposely trained and tested deep-learning annotation model to automatically label the large number of scan events. Each anomaly scan was thus partitioned into a sequence of anatomical planes or fetal structures obtained over time.

Results: A total of 496 anomaly scans performed by 14 sonographers were available for analysis. UK guidelines specify that an image or videoclip of five different anatomical regions must be stored, and these were detected in the majority of scans: head/brain was detected in 97.2% of scans, coronal face view (nose/lips) in 86.1%, abdomen in 93.1%, spine in 95.0% and femur in 92.3%. Analyzing the clinical workflow, we observed that sonographers were most likely to begin their scan by capturing the head/brain (in 24.4% of scans), spine (in 23.2%) or thorax/heart (in 22.8%). The most commonly identified two-structure transitions were: placenta/amniotic fluid to maternal anatomy, occurring in 44.5% of scans; head/brain to coronal face (nose/lips) in 42.7%; abdomen to thorax/heart in 26.1%; and three-dimensional/four-dimensional face to sagittal face (profile) in 23.7%. Transitions between three or more consecutive structures in sequence were uncommon (observed in up to 13% of scans). None of the captured anomaly scans shared an entirely identical sequence.

Conclusions: We present a novel evaluation of the anomaly-scan acquisition process using a deep-learning-based analysis of ultrasound video. We note wide variation in the number and sequence of structures obtained during routine second-trimester anomaly scans. Overall, each anomaly scan was found to be unique in its scanning sequence, suggesting that sonographers take advantage of the fetal position and acquire the standard planes according to their visibility rather than following a strict acquisition order.
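The workflow statistics reported above (first-captured structure and two-structure transition rates) reduce to simple counting once each scan has been converted into an ordered label sequence by the annotation model. The following is a minimal illustrative sketch of that counting step only; the scan sequences and label strings are hypothetical, not the study's data, and this is not the authors' code.

```python
from collections import Counter

# Hypothetical labeled scan sequences: each scan is the ordered list of
# anatomical-structure labels produced by the annotation model.
scans = [
    ["head/brain", "coronal face", "abdomen", "thorax/heart", "femur"],
    ["spine", "head/brain", "coronal face", "abdomen", "thorax/heart"],
    ["thorax/heart", "abdomen", "thorax/heart", "placenta/amniotic fluid",
     "maternal anatomy"],
]

# Frequency of the structure captured first in each scan.
first_structure = Counter(scan[0] for scan in scans)

# Two-structure transitions, counted at most once per scan so that the
# result matches a per-scan rate ("occurring in X% of scans").
transitions = Counter()
for scan in scans:
    transitions.update(set(zip(scan, scan[1:])))

# Convert counts to percentages of scans.
pct_first = {s: 100 * n / len(scans) for s, n in first_structure.items()}
pct_trans = {t: 100 * n / len(scans) for t, n in transitions.items()}
```

With real data, sorting `pct_trans` in descending order would yield the most common transitions, analogous to the placenta/amniotic fluid to maternal anatomy figure of 44.5% reported in the Results.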
Pages: 759-765 (7 pages)