Deep learning framework for interpretable quality control of echocardiography video

Cited by: 0
Authors
Du, Liwei [1 ]
Xue, Wufeng [1 ]
Qi, Zhanru [2 ]
Shi, Zhongqing [2 ]
Guo, Guanjun [2 ]
Yang, Xin [1 ]
Ni, Dong [1 ]
Yao, Jing [2 ,3 ,4 ]
Affiliations
[1] Shenzhen Univ, Med Sch, Sch Biomed Engn, Shenzhen, Peoples R China
[2] Nanjing Univ, Affiliated Hosp, Dept Ultrasound Med, Med Sch, Nanjing, Peoples R China
[3] Nanjing Univ, Affiliated Hosp, Med Imaging Ctr, Med Sch, Nanjing, Peoples R China
[4] Yizheng Hosp, Nanjing Drum Tower Hosp Grp, Yangzhou, Peoples R China
Keywords
echocardiography video; multitask network; quality control; real-time; visualized explanation;
DOI
10.1002/mp.17722
CLC Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
Background: Echocardiography (echo) has become an indispensable tool in modern cardiology, offering real-time imaging that helps clinicians evaluate heart function and identify abnormalities. Despite these advantages, the acquisition of high-quality echo is time-consuming, labor-intensive, and highly subjective.
Purpose: The objective of this study is to introduce a comprehensive system for the automated quality control (QC) of echo videos. This system focuses on real-time monitoring of key imaging parameters, reducing the variability associated with manual QC processes.
Methods: Our multitask network analyzes cardiac cycle integrity, anatomical structures (AS), depth, cardiac axis angle (CAA), and gain. The network consists of a shared convolutional neural network (CNN) backbone for spatial feature extraction, along with three additional modules: (1) a bidirectional long short-term memory (Bi-LSTM) phase analysis (PA) module for detecting cardiac cycles and QC targets; (2) an oriented object detection head for AS analysis and depth/CAA quantification; and (3) a classification head for gain analysis. The model was trained and tested on a dataset of 1331 echo videos. Through model inference, a comprehensive score is generated, offering easily interpretable insights.
Results: The model achieved a mean average precision of 0.962 for AS detection, with PA yielding average frame errors of 1.603 ± 1.181 (end-diastolic) and 1.681 ± 1.332 (end-systolic). The gain classification model demonstrated robust performance (Area Under the Curve > 0.98), and the overall processing speed reached 112.4 frames per second. On 203 randomly collected echo videos, the model achieved a kappa coefficient of 0.79 for rating consistency compared to expert evaluations.
Conclusions: Given the model's performance on the clinical dataset and its consistency with expert evaluations, our results indicate that the model not only delivers real-time, interpretable quality scores but also demonstrates strong clinical reliability.
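The shared-backbone, multi-head layout described in the Methods can be illustrated with a compact sketch. The following is a minimal PyTorch example of that layout, not the authors' implementation: it assumes a ResNet-18 stand-in for the shared CNN backbone, a simplified per-frame regression head in place of the paper's oriented object detection branch, and an assumed three-class gain scheme; the class name EchoQCMultitaskNet and all layer sizes are illustrative.

```python
# Minimal multitask sketch: shared CNN backbone applied per frame, a Bi-LSTM
# head for cardiac phase analysis, a simplified detection-style head, and a
# gain classification head. Sizes and head designs are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class EchoQCMultitaskNet(nn.Module):
    def __init__(self, feat_dim=512, lstm_hidden=256,
                 num_structures=5, num_gain_classes=3):
        super().__init__()
        # Shared spatial feature extractor (ResNet-18 without its final fc).
        backbone = resnet18(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])

        # (1) Bi-LSTM phase-analysis head: per-frame ED / ES / other logits.
        self.phase_lstm = nn.LSTM(feat_dim, lstm_hidden,
                                  batch_first=True, bidirectional=True)
        self.phase_fc = nn.Linear(2 * lstm_hidden, 3)

        # (2) Stand-in for the oriented detection head: one oriented box
        # (cx, cy, w, h, angle) plus a presence score per anatomical structure.
        self.det_fc = nn.Linear(feat_dim, num_structures * 6)

        # (3) Gain classification head (class set assumed).
        self.gain_fc = nn.Linear(feat_dim, num_gain_classes)

    def forward(self, video):                                 # (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1)).flatten(1)  # (B*T, 512)
        feats = feats.view(b, t, -1)                            # (B, T, 512)

        phase_logits = self.phase_fc(self.phase_lstm(feats)[0])  # (B, T, 3)
        det_out = self.det_fc(feats).view(b, t, -1, 6)           # (B, T, S, 6)
        gain_logits = self.gain_fc(feats.mean(dim=1))            # (B, classes)
        return phase_logits, det_out, gain_logits


if __name__ == "__main__":
    net = EchoQCMultitaskNet()
    clip = torch.randn(1, 16, 3, 224, 224)  # one 16-frame echo clip
    phase, det, gain = net(clip)
    print(phase.shape, det.shape, gain.shape)
```

In this sketch the per-frame outputs of the phase and detection heads would be aggregated over a detected cardiac cycle to produce the kind of comprehensive quality score the paper describes; that scoring step is omitted here.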
Pages: 15