Multimodal deformable registration based on unsupervised learning

Cited by: 0
Authors
Ma T. [1 ]
Li Z. [1 ]
Liu R. [1 ]
Fan X. [1 ]
Luo Z. [1 ]
Affiliations
[1] International School of Information Science & Engineering, Dalian University of Technology, Dalian
Funding
National Natural Science Foundation of China
Keywords
Computer vision; Deep learning; Medical image registration; Multimodal; Unsupervised;
DOI
10.13700/j.bh.1001-5965.2020.0449
Abstract
Multimodal deformable registration estimates a dense spatial transformation that aligns images of two different modalities, and it is a key problem in many medical image analysis applications. Traditional methods solve an optimization problem for each image pair and usually achieve excellent registration accuracy, but at high computational cost and long running time. Deep learning methods greatly reduce running time by learning a network that performs the registration, and such learning-based methods are very effective for single-modality registration. However, the intensity distributions of images from different modalities are unknown and complex, and most existing methods rely heavily on labeled data. To address these challenges, this paper proposes a deep multimodal registration framework based on unsupervised learning. Specifically, the framework consists of feature learning based on a matching cost volume and deformation field learning based on maximum a posteriori estimation, and it achieves unsupervised training through a spatial transformer and a differentiable mutual information loss. On 3D registration tasks involving MRI T1, MRI T2 and CT images, the proposed method is compared with existing state-of-the-art multimodal registration methods; its registration performance is also demonstrated on recent COVID-19 CT data. Extensive results show that the proposed method is competitive in registration accuracy and greatly reduces computation time. © 2021, Editorial Board of JBUAA. All rights reserved.
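The abstract describes unsupervised training that combines a spatial transformer with a differentiable mutual information loss. Below is a minimal sketch of such a training step in PyTorch; it is not the authors' code. The toy network `RegNet`, the 2-D setting, the Parzen-window (soft-histogram) MI estimate with its bin count and bandwidth, and the smoothness weight are all illustrative assumptions.

```python
# Sketch: unsupervised deformable registration with a spatial-transformer warp
# and a differentiable mutual-information (MI) loss. Assumes PyTorch >= 1.10.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegNet(nn.Module):
    """Toy CNN mapping a (moving, fixed) image pair to a 2-D displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),          # 2 channels: (dx, dy) in pixels
        )

    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))


def warp(moving, flow):
    """Spatial transformer: bilinearly resample `moving` with displacement `flow`."""
    n, _, h, w = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=moving.device),
        torch.linspace(-1, 1, w, device=moving.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel displacements to the normalized coordinate range.
    disp = torch.stack((flow[:, 0] * 2 / max(w - 1, 1),
                        flow[:, 1] * 2 / max(h - 1, 1)), dim=-1)
    return F.grid_sample(moving, grid + disp, align_corners=True)


def mutual_information(x, y, bins=32, sigma=0.05):
    """Differentiable MI via Gaussian soft histograms on intensities in [0, 1]."""
    centers = torch.linspace(0, 1, bins, device=x.device)
    px = torch.exp(-((x.reshape(x.shape[0], -1, 1) - centers) ** 2) / (2 * sigma ** 2))
    py = torch.exp(-((y.reshape(y.shape[0], -1, 1) - centers) ** 2) / (2 * sigma ** 2))
    px = px / (px.sum(dim=-1, keepdim=True) + 1e-8)
    py = py / (py.sum(dim=-1, keepdim=True) + 1e-8)
    joint = torch.bmm(px.transpose(1, 2), py) / px.shape[1]     # (N, bins, bins)
    mx, my = joint.sum(dim=2, keepdim=True), joint.sum(dim=1, keepdim=True)
    return (joint * torch.log((joint + 1e-8) / (mx * my + 1e-8))).sum(dim=(1, 2)).mean()


if __name__ == "__main__":
    net = RegNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    moving = torch.rand(1, 1, 64, 64)    # stand-in for, e.g., an MRI T1 slice
    fixed = torch.rand(1, 1, 64, 64)     # stand-in for the corresponding CT slice
    for _ in range(10):                  # a few unsupervised training steps
        flow = net(moving, fixed)
        warped = warp(moving, flow)
        # Total-variation-style smoothness regularizer on the displacement field.
        smooth = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
                 (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
        loss = -mutual_information(warped, fixed) + 0.1 * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because both the warp and the soft-histogram MI are differentiable, the loss needs no ground-truth deformations or segmentation labels, which is the sense in which the training is unsupervised.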
Pages: 658-664
Number of pages: 6