Visual servoing of laser ablation based cochleostomy

Cited by: 6
Authors
Kahrs, Lueder A. [1 ]
Raczkowsky, Joerg [1 ]
Werner, Martin [2 ]
Knapp, Felix B. [3 ]
Mehrwald, Markus [1 ]
Hering, Peter [2 ,4 ]
Schipper, Joerg [3 ]
Klenzner, Thomas [3 ]
Woern, Heinz [1 ]
Affiliations
[1] Univ Karlsruhe TH, Inst Proc Control & Robot, Engler Bunte Ring 8, D-76131 Karlsruhe, Germany
[2] Caesar, D-53175 Bonn, Germany
[3] Univ Hosp, Dept Oto Rhino Laryngol, D-40225 Dusseldorf, Germany
[4] Univ Dusseldorf, Inst Laser Med, D-40001 Dusseldorf, Germany
Keywords
image-guided therapy; monitoring and feedback; segmentation and rendering; calibration; localization & tracking technologies; medical robotics;
DOI
10.1117/12.770863
Chinese Library Classification
R318 [Biomedical Engineering]
Discipline Code
0831
Abstract
The aim of this study is defined, visually based, camera-controlled bone removal on the promontory of the inner ear using a navigated CO2 laser. The intended application is a precise and minimally traumatic opening of the cochlea for the insertion of a cochlear implant electrode (a so-called cochleostomy). Damage to the membranous linings of the inner ear can impair remaining organ function (e.g. complete deafness or vertigo). We investigate precise tissue removal with a laser-based bone ablation system. Inside the borehole the pulsed laser beam is guided automatically over the bone by a two-mirror galvanometric scanner, and the ablation process is controlled by visual servoing. To detect the boundary layers of the inner ear, the ablation area is monitored by a color camera; the acquired images are analyzed by image processing, and the results are used to control the laser ablation process. This publication describes the complete system, including the image processing algorithms and the concept for distributing the individual laser pulses. The system has been tested on human cochleae in ex-vivo studies. Further developments could lead to safe intraoperative openings of the cochlea by a robot-based surgical laser instrument.
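The abstract describes a closed control loop: a color camera monitors the ablation site, image processing detects the boundary layer of the inner ear, and the result determines where the galvanometric scanner places the next laser pulses. The following minimal sketch illustrates such a loop in Python; all function names, color thresholds, and the camera/scanner/laser interfaces are hypothetical stand-ins chosen for illustration, not the authors' implementation.

```python
"""Minimal sketch of a camera-controlled laser ablation loop (visual servoing).
All thresholds and hardware interfaces below are illustrative assumptions."""

import numpy as np

# Hypothetical stop criterion: fraction of crater pixels classified as exposed
# boundary layer (endosteum) at which ablation is halted (assumed value).
MEMBRANE_FRACTION_LIMIT = 0.05


def segment_exposed_membrane(rgb_image: np.ndarray) -> np.ndarray:
    """Return a boolean mask of pixels whose color suggests the boundary layer.

    Placeholder heuristic: residual bone appears pale/whitish, the exposed
    membrane darker and more saturated.  A real system would need a calibrated
    classifier; the numbers here are purely illustrative.
    """
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    brightness = (r.astype(float) + g + b) / 3.0
    saturation = rgb_image.max(axis=-1) - rgb_image.min(axis=-1)
    return (brightness < 90) & (saturation > 40)


def plan_pulse_positions(membrane_mask: np.ndarray, crater_mask: np.ndarray,
                         n_pulses: int = 20) -> np.ndarray:
    """Choose pixel coordinates for the next laser pulses.

    Pulses are placed only where the crater still shows bone (inside the
    target area but outside the detected membrane region), so ablation stops
    locally as soon as the boundary layer appears.
    """
    candidates = np.argwhere(crater_mask & ~membrane_mask)
    if len(candidates) == 0:
        return np.empty((0, 2), dtype=int)
    idx = np.random.choice(len(candidates),
                           size=min(n_pulses, len(candidates)),
                           replace=False)
    return candidates[idx]


def servo_loop(camera, scanner, laser, crater_mask: np.ndarray) -> None:
    """Closed loop: image -> segmentation -> pulse plan -> galvo command.

    `camera`, `scanner`, and `laser` are stand-ins for the real hardware
    drivers (color camera, two-mirror galvanometric scanner, pulsed CO2 laser).
    """
    while True:
        frame = camera.grab()                        # acquire color image
        membrane = segment_exposed_membrane(frame)   # detect boundary layer
        exposed = (membrane & crater_mask).sum() / crater_mask.sum()
        if exposed > MEMBRANE_FRACTION_LIMIT:
            break                                    # stop before perforation
        for y, x in plan_pulse_positions(membrane, crater_mask):
            scanner.move_to(x, y)                    # steer beam via mirrors
            laser.fire_pulse()                       # deliver one pulse
```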
Pages: 11
Related Papers
50 records in total
• [31] Sundareswaran V, Behringer R. Visual servoing-based augmented reality. AUGMENTED REALITY: PLACING ARTIFICIAL OBJECTS IN REAL SCENES, 1999: 193-200.
• [32] Zhao QJ, Sun ZQ, Deng HB. Robot visual servoing based on total Jacobian. ADVANCES IN COMPUTER SCIENCE - ASIAN 2004, PROCEEDINGS, 2004, 3321: 271-285.
• [33] Penza V, Salerno D, Acemoglu A, Ortiz J, Mattos LS. Hybrid Visual Servoing for Autonomous Robotic Laser Tattoo Removal. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 4461-4466.
• [34] Abidi H, Chtourou M, Kaaniche K, Mekki H. Visual servoing based on efficient histogram information. International Journal of Control, Automation and Systems, 2017, 15: 1746-1753.
• [35] Amin-Nejad S, Smith JS, Lucas J. A visual servoing system for edge trimming of fabric embroideries by laser. MECHATRONICS, 2003, 13(06): 533-551.
• [36] Crombez N, Caron G, Mouaddib EM. Photometric Gaussian Mixtures based Visual Servoing. 2015 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2015: 5486-5491.
• [37] Argus M, Hermann L, Long J, Brox T. FlowControl: Optical Flow Based Visual Servoing. 2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020: 7534-7541.
• [38] Santamaria-Navarro A, Andrade-Cetto J. Uncalibrated image-based visual servoing. 2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2013: 5247-5252.
• [39] Tian SX, Wang SZ. An Image-based Visual Servoing for Manipulator. NEW TRENDS AND APPLICATIONS OF COMPUTER-AIDED MATERIAL AND ENGINEERING, 2011, 186: 277-280.
• [40] Du JJ, Wang XY, Zhao WS, Li CH. A visual servoing system based on Adept robot. Kongzhi Lilun Yu Yingyong/Control Theory and Applications, 2007, 24(04): 565-568.