Deep learning-based motion tracking using ultrasound images

Cited by: 18
Authors
Dai, Xianjin [1,2]
Lei, Yang [1,2]
Roper, Justin [1,2]
Chen, Yue [3,4]
Bradley, Jeffrey D. [1,2]
Curran, Walter J. [1,2]
Liu, Tian [1,2]
Yang, Xiaofeng [1,2,3,4]
Affiliations
[1] Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA
[2] Emory Univ, Winship Canc Inst, Atlanta, GA 30322 USA
[3] Georgia Inst Technol, Wallace H Coulter Dept Biomed Engn, Atlanta, GA 30332 USA
[4] Emory Univ, Sch Med, Atlanta, GA 30322 USA
Funding
US National Institutes of Health (NIH);
Keywords
deep learning; image-guided therapy; motion tracking; radiotherapy; ultrasound imaging; TIME TUMOR-TRACKING; RESPIRATORY MOTION; RADIATION-THERAPY; FIDUCIAL MARKERS; GUIDED RADIOTHERAPY; LIVER; REGISTRATION; FEASIBILITY; MANAGEMENT; SETUP;
DOI
10.1002/mp.15321
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002; 100207; 1009;
Abstract
Purpose: Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation, and it has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method was developed to tackle the challenges of motion tracking using US imaging.
Methods: We present a Markov-like network, implemented via generative adversarial networks, that extracts features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimates a set of deformation vector fields (DVFs) through registration of the tracked frame to the untracked frames. The positions of the landmarks in the untracked frames are then determined by shifting the landmarks of the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground-truth landmarks on each frame.
Results: The proposed method was evaluated on the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and on the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences from 42 and 18 subjects, respectively, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, the proposed method achieved a mean tracking error of 0.70 +/- 0.38 mm for the 2D sequences and 1.71 +/- 0.84 mm for the 3D sequences using the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 +/- 1.24 mm was achieved for landmarks in the left atrium.
Conclusions: A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study. The proposed method offers millimeter-level tumor motion prediction in real time and has the potential to be adopted into routine tumor motion management in radiation therapy.
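As a concrete illustration of the tracking step described in the abstract (not the authors' implementation), the minimal Python sketch below shows how a landmark in the tracked reference frame can be shifted by the displacement read from an estimated DVF, and how the tracking error is then computed as the Euclidean distance to the ground-truth annotation. The function names, array shapes, pixel spacing, and nearest-neighbour DVF lookup are assumptions made for illustration only.

```python
# Minimal sketch: propagate landmarks with a deformation vector field (DVF)
# and score the result with the tracking error (TE), as defined in the abstract.
import numpy as np

def propagate_landmarks(landmarks_mm, dvf, spacing_mm):
    """Shift landmarks (N x 2, in mm) by the displacement sampled from the DVF.

    dvf: array of shape (H, W, 2), per-pixel displacement in mm (assumed layout).
    spacing_mm: pixel spacing (row, col) in mm, used to index the DVF grid.
    """
    idx = np.round(landmarks_mm / np.asarray(spacing_mm)).astype(int)
    idx[:, 0] = np.clip(idx[:, 0], 0, dvf.shape[0] - 1)
    idx[:, 1] = np.clip(idx[:, 1], 0, dvf.shape[1] - 1)
    displacement = dvf[idx[:, 0], idx[:, 1]]  # nearest-neighbour lookup
    return landmarks_mm + displacement

def tracking_error(predicted_mm, ground_truth_mm):
    """Per-landmark Euclidean distance (mm) between prediction and annotation."""
    return np.linalg.norm(predicted_mm - ground_truth_mm, axis=1)

# Toy usage with synthetic values (illustration only, not study data).
dvf = np.zeros((128, 128, 2))
dvf[..., 0] = 0.5                                   # uniform 0.5 mm shift along rows
landmarks = np.array([[30.0, 40.0], [55.0, 60.0]])  # reference-frame landmarks, mm
pred = propagate_landmarks(landmarks, dvf, spacing_mm=(0.5, 0.5))
gt = landmarks + np.array([0.4, 0.0])               # hypothetical annotations
print(tracking_error(pred, gt))                     # per-landmark TE in mm
```

In the paper, the DVFs are estimated by the Markov-like adversarial network from the US frame sequence; the sketch only replays the downstream bookkeeping of shifting landmarks and measuring TE.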
Pages: 7747-7756
Number of pages: 10