Towards Autonomous Retinal Microsurgery Using RGB-D Images

Times Cited: 0
Authors
Kim, Ji Woong [1 ]
Wei, Shuwen [2 ]
Zhang, Peiyao [1 ]
Gehlbach, Peter [3 ]
Kang, Jin U. [2 ]
Iordachita, Iulian [1 ]
Kobilarov, Marin [1 ]
Affiliations
[1] Johns Hopkins Univ, Mech Engn Dept, Baltimore, MD 21218 USA
[2] Johns Hopkins Univ, Elect & Comp Engn Dept, Baltimore, MD 21218 USA
[3] Johns Hopkins Wilmer Eye Inst, Baltimore, MD 21287 USA
Funding
US National Institutes of Health
Keywords
Computer vision for medical robotics; medical robots and systems; vision-based navigation;
DOI
10.1109/LRA.2024.3368192
Chinese Library Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Retinal surgery is a challenging procedure requiring precise manipulation of fragile retinal tissue, often at the scale of tens of micrometers. Its difficulty has motivated the development of robotic assistance platforms for precise motion and, more recently, of novel sensors such as microscope-integrated optical coherence tomography (OCT), which provides an RGB-D view of the surgical workspace. The combination of these devices opens new possibilities for robotic automation of tasks such as subretinal injection (SI), a procedure that involves precise needle insertion into the retina for targeted drug delivery. Motivated by this opportunity, we develop a framework for autonomous needle navigation during SI: the surgeon specifies waypoint goals in the microscope and OCT views, and the system autonomously navigates the needle to the desired subretinal space in real time. Our system integrates OCT and microscope images with convolutional neural networks (CNNs) that automatically segment the surgical tool and retinal tissue boundaries, and uses model predictive control to generate optimal trajectories that respect kinematic constraints, ensuring patient safety. We validate our system by demonstrating 30 successful SI trials on pig eyes. Preliminary comparisons to a human operator in robot-assisted mode highlight the enhanced safety and performance of our system.
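The abstract pairs CNN-based segmentation with model predictive control that keeps needle trajectories within kinematic safety limits. As a loose illustration of that constrained-navigation idea only (not the authors' controller), the sketch below runs a one-step receding-horizon update that drives a needle-tip position toward a surgeon-specified waypoint while clipping the commanded velocity to a maximum-speed bound; all names, units, and the `v_max` value are hypothetical.

```python
import numpy as np

def plan_step(tip_pos, goal, v_max=0.1, dt=0.05):
    """One receding-horizon step: command the needle tip toward the goal
    while enforcing a maximum-velocity safety constraint.
    Positions in millimetres, v_max in mm/s, dt in seconds."""
    error = goal - tip_pos
    v = error / dt                     # velocity that would reach the goal in one step
    speed = np.linalg.norm(v)
    if speed > v_max:                  # clip to the kinematic safety limit
        v *= v_max / speed
    return tip_pos + v * dt            # next commanded tip position

# Drive the tip toward a surgeon-specified waypoint.
pos = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 0.5, -0.3])
for _ in range(400):                   # enough steps to cover the distance at v_max
    pos = plan_step(pos, goal)
print(np.round(pos, 3))
```

In the paper's setting the goal would come from the surgeon's clicks in the microscope and OCT views, and the dynamics, horizon, and constraints would be far richer; this sketch only shows how a per-step speed bound shapes the commanded motion.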
Pages: 3807-3814 (8 pages)