Dense Depth Estimation in Monocular Endoscopy With Self-Supervised Learning Methods

Cited: 85
Authors
Liu, Xingtong [1]
Sinha, Ayushi [1,2]
Ishii, Masaru [3]
Hager, Gregory D. [1]
Reiter, Austin [1,4]
Taylor, Russell H. [1]
Unberath, Mathias [1]
Affiliations
[1] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21287 USA
[2] Philips Res, Cambridge, MA 02141 USA
[3] Johns Hopkins Med Inst, Dept Otolaryngol Head & Neck Surg, Baltimore, MD 21224 USA
[4] Facebook, New York, NY 10003 USA
Keywords
Estimation; Endoscopes; Cameras; Videos; Training; Image reconstruction; Three-dimensional displays; Endoscopy; unsupervised learning; self-supervised learning; depth estimation; NAVIGATION SYSTEM; VISION;
DOI
10.1109/TMI.2019.2950936
CLC number
TP39 [Computer applications];
Discipline code
081203 ; 0835 ;
Abstract
We present a self-supervised approach to training convolutional neural networks for dense depth estimation from monocular endoscopy data without a priori modeling of anatomy or shading. Our method requires only monocular endoscopic videos and a multi-view stereo method, e.g., structure from motion, to supervise learning in a sparse manner. Consequently, our method requires neither manual labeling nor patient computed tomography (CT) scans in either the training or application phase. In a cross-patient experiment using CT scans as ground truth, the proposed method achieved submillimeter mean residual error. In a comparison study on in vivo sinus endoscopy data against recent self-supervised depth estimation methods designed for natural videos, we demonstrate that the proposed approach outperforms the previous methods by a large margin. The source code for this work is publicly available online at https://github.com/lppllppl920/EndoscopyDepthEstimation-Pytorch.
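The abstract's core idea, supervising a dense depth network with sparse, up-to-scale point depths recovered by structure from motion, can be sketched as a masked, scale-aligned loss. The following NumPy snippet is an illustrative sketch only, not the paper's actual loss formulation; the function name and the median-ratio scale alignment are assumptions made for the example.

```python
import numpy as np

def sparse_depth_loss(pred_depth, sfm_depth, mask, eps=1e-8):
    """Masked L1 loss between a dense depth prediction and sparse
    structure-from-motion (SfM) depths.

    pred_depth : (H, W) dense depth map predicted by the network
    sfm_depth  : (H, W) SfM depths, valid only where mask is True
    mask       : (H, W) boolean, True at pixels with an SfM point
    """
    pred = pred_depth[mask]
    ref = sfm_depth[mask]
    # SfM reconstructions are defined only up to a global scale, so
    # align the prediction to the sparse points with a closed-form
    # median-ratio scale factor before comparing.
    scale = np.median(ref) / (np.median(pred) + eps)
    return float(np.mean(np.abs(scale * pred - ref)))

# Toy example: a prediction that matches the sparse points up to a
# global scale factor incurs (near-)zero loss.
pred = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[True, False], [True, True]])
loss = sparse_depth_loss(pred, 2.0 * pred, mask)
```

Because only pixels with an SfM point contribute, the supervision is sparse in exactly the sense the abstract describes: no dense labels and no CT scans are needed, only multi-view reconstructions from the endoscopic video itself.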
Pages: 1438 - 1447
Page count: 10
Related papers
50 records in total
  • [1] Self-supervised Learning for Dense Depth Estimation in Monocular Endoscopy
    Liu, Xingtong
    Sinha, Ayushi
    Unberath, Mathias
    Ishii, Masaru
    Hager, Gregory D.
    Taylor, Russell H.
    Reiter, Austin
    [J]. OR 2.0 CONTEXT-AWARE OPERATING THEATERS, COMPUTER ASSISTED ROBOTIC ENDOSCOPY, CLINICAL IMAGE-BASED PROCEDURES, AND SKIN IMAGE ANALYSIS, OR 2.0 2018, 2018, 11041 : 128 - 138
  • [2] Self-supervised monocular depth estimation for gastrointestinal endoscopy
    Liu, Yuying
    Zuo, Siyang
    [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2023, 238
  • [3] Self-supervised monocular depth estimation with direct methods
    Wang, Haixia
    Sun, Yehao
    Wu, Q. M. Jonathan
    Lu, Xiao
    Wang, Xiuling
    Zhang, Zhiguo
    [J]. NEUROCOMPUTING, 2021, 421 : 340 - 348
  • [4] Self-supervised monocular image depth learning and confidence estimation
    Chen, Long
    Tang, Wen
    Wan, Tao Ruan
    John, Nigel W.
    [J]. NEUROCOMPUTING, 2020, 381 : 272 - 281
  • [5] Self-supervised Monocular Pose and Depth Estimation for Wireless Capsule Endoscopy with Transformers
    Nazifi, Nahid
    Araujo, Helder
    Erabati, Gopi Krishna
    Tahri, Omar
    [J]. IMAGE-GUIDED PROCEDURES, ROBOTIC INTERVENTIONS, AND MODELING, MEDICAL IMAGING 2024, 2024, 12928
  • [6] Digging Into Self-Supervised Monocular Depth Estimation
    Godard, Clement
    Mac Aodha, Oisin
    Firman, Michael
    Brostow, Gabriel
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3827 - 3837
  • [7] On the uncertainty of self-supervised monocular depth estimation
    Poggi, Matteo
    Aleotti, Filippo
    Tosi, Fabio
    Mattoccia, Stefano
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 3224 - 3234
  • [8] Revisiting Self-supervised Monocular Depth Estimation
    Kim, Ue-Hwan
    Lee, Gyeong-Min
    Kim, Jong-Hwan
    [J]. ROBOT INTELLIGENCE TECHNOLOGY AND APPLICATIONS 6, 2022, 429 : 336 - 350
  • [9] Self-supervised monocular depth estimation in fog
    Tao, Bo
    Hu, Jiaxin
    Jiang, Du
    Li, Gongfa
    Chen, Baojia
    Qian, Xinbo
    [J]. OPTICAL ENGINEERING, 2023, 62 (03)