Segmentation-guided multi-modal registration of liver images for dose estimation in SIRT

Cited by: 3
Authors
Tang, Xikai [1 ,6 ]
Rangraz, Esmaeel Jafargholi [8 ]
Heeren, Richard [2]
Coudyzer, Walter [4 ]
Maleux, Geert [2 ,4 ]
Baete, Kristof [1 ,3 ]
Verslype, Chris [5 ]
Gooding, Mark J. [7 ]
Deroose, Christophe M. [1 ,3 ]
Nuyts, Johan [1 ,6 ]
Affiliations
[1] Katholieke Univ Leuven, Nucl Med & Mol Imaging, Leuven, Belgium
[2] Katholieke Univ Leuven, Radiol, Leuven, Belgium
[3] Univ Hosp Leuven, Nucl Med, Leuven, Belgium
[4] Univ Hosp Leuven, Radiol, Leuven, Belgium
[5] Univ Hosp Leuven, Digest Oncol, Leuven, Belgium
[6] Med Imaging Res Ctr MIRC, UZ Herestr 49, Box 7003, B-3000 Leuven, Belgium
[7] Mirada Med Ltd, Oxford, England
[8] Quirem Med BV, Deventer, Netherlands
Funding
European Union Horizon 2020;
Keywords
Selective internal radiation therapy (SIRT); Liver registration; Convolutional neural network (CNN); Internal dosimetry; Multi-modality images; FOLLOW-UP; CT; MR;
DOI
10.1186/s40658-022-00432-8
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline classification codes
1002 ; 100207 ; 1009 ;
Abstract
Purpose: Selective internal radiation therapy (SIRT) requires good liver registration of multi-modality images to obtain precise dose prediction and measurement. This study investigated the feasibility of liver registration of CT and MR images guided by segmentation of the liver and its landmarks, and evaluated the influence of the resulting lesion registration on dose estimation.
Methods: The liver was segmented with a convolutional neural network (CNN), and the landmarks were segmented manually. Our image-based registration software and its liver-segmentation-guided extension (CNN-guided) were tuned and evaluated with 49 CT and 26 MR images from 20 SIRT patients. Each liver registration was evaluated by the root mean square distance (RMSD) over the mean surface distance between manually delineated liver contours and the mass-center distances between manually delineated landmarks (lesions, clips, etc.). The root mean square of the RMSDs (RRMSD) was used to evaluate all liver registrations together. The CNN-guided registration was further extended by incorporating landmark segmentations (CNN&LM-guided) to assess the value of additional landmark guidance. To evaluate the influence of segmentation-guided registration on dose estimation, the mean dose and the volume percentage receiving at least 70 Gy (V70) were estimated on the Tc-99m-labeled macro-aggregated albumin (Tc-99m-MAA) SPECT, based either on lesions delineated on the reference Tc-99m-MAA CT (reference lesions) or on lesions propagated from the floating CT or MR images registered with the CNN- or CNN&LM-guided algorithms (registered lesions).
Results: The RRMSD decreased for the floating CTs and MRs by 1.0 mm (11%) and 3.4 mm (34%) when CNN guidance was added to the image-based registration, and by a further 2.1 mm (26%) and 1.4 mm (21%) when landmark guidance was added to the CNN-guided registration. The quartiles of the relative mean dose difference (and of the V70 difference) between reference and registered lesions, with their correlations [25th, 75th; r], were: [-5.5% (-1.3%), 5.6% (3.4%); 0.97 (0.95)] and [-12.3% (-2.1%), 14.8% (2.9%); 0.96 (0.97)] for the CNN&LM- and CNN-guided CT-to-CT registrations, and [-7.7% (-6.6%), 7.0% (3.1%); 0.97 (0.90)] and [-15.1% (-11.3%), 2.4% (2.5%); 0.91 (0.78)] for the CNN&LM- and CNN-guided MR-to-CT registrations.
Conclusion: Guidance by CNN liver segmentations and landmarks markedly improves the performance of the image-based registration. The small mean dose change between reference and registered lesions demonstrates the feasibility of applying the CNN&LM- or CNN-guided registration to volume-level dose prediction. The CNN&LM- and CNN-guided CT-to-CT registrations can also be applied to voxel-level dose prediction, given their small V70 change for most lesions. The CNN-guided MR-to-CT registration still needs landmark guidance to achieve a comparably small change in voxel-level dose estimation.
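The evaluation quantities named in the abstract (per-registration RMSD, the RRMSD across all registrations, and the lesion-level mean dose and V70) are straightforward to compute. The following is a minimal Python sketch of how such metrics could be evaluated, assuming NumPy arrays for the pooled surface/landmark distances and for a voxel-wise dose map; the function names and toy data are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the evaluation metrics described in the abstract:
# per-registration RMSD over surface/landmark distances, RRMSD over all
# registrations, and lesion-level mean dose / V70 from a voxel-wise dose map.
import numpy as np

def rmsd(distances):
    """Root mean square of a set of distances (mm), e.g. liver surface
    distances plus landmark mass-center distances for one registration."""
    d = np.asarray(distances, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def rrmsd(per_registration_rmsds):
    """Root mean square of the per-registration RMSDs (RRMSD)."""
    r = np.asarray(per_registration_rmsds, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))

def lesion_dose_metrics(dose_gy, lesion_mask, threshold_gy=70.0):
    """Mean dose and V70 (% of lesion volume receiving >= threshold_gy)
    for a lesion mask applied to a 3-D dose map (same voxel grid)."""
    lesion_dose = dose_gy[lesion_mask > 0]
    mean_dose = float(lesion_dose.mean())
    v70 = float(100.0 * np.mean(lesion_dose >= threshold_gy))
    return mean_dose, v70

if __name__ == "__main__":
    # Toy registration metric: RMSD per registration, RRMSD overall (mm).
    per_reg_distances = [[2.1, 3.0, 1.4], [5.2, 4.8], [1.0, 0.7, 2.2, 1.9]]
    per_reg_rmsd = [rmsd(d) for d in per_reg_distances]
    print(f"RRMSD over {len(per_reg_rmsd)} registrations: "
          f"{rrmsd(per_reg_rmsd):.1f} mm")

    # Toy dose comparison between a reference lesion (delineated on the
    # Tc-99m-MAA CT) and the same lesion propagated by a registration.
    rng = np.random.default_rng(0)
    dose = rng.gamma(shape=2.0, scale=40.0, size=(64, 64, 64))  # Gy
    reference_mask = np.zeros_like(dose, dtype=bool)
    reference_mask[20:40, 20:40, 20:40] = True
    registered_mask = np.roll(reference_mask, shift=2, axis=0)  # small misalignment

    ref_mean, ref_v70 = lesion_dose_metrics(dose, reference_mask)
    reg_mean, reg_v70 = lesion_dose_metrics(dose, registered_mask)
    rel_mean_diff = 100.0 * (reg_mean - ref_mean) / ref_mean
    print(f"relative mean dose difference: {rel_mean_diff:+.1f}%, "
          f"V70 difference: {reg_v70 - ref_v70:+.1f} percentage points")
```

In the study itself these quantities were derived from SPECT-based dose maps and manually delineated lesions; the toy arrays here stand in only to show the arithmetic.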
Pages: 20