Vision-Based Suture Tensile Force Estimation in Robotic Surgery

Times Cited: 19
Authors
Jung, Won-Jo [1]
Kwak, Kyung-Soo [1]
Lim, Soo-Chul [1]
Affiliations
[1] Dongguk Univ, Dept Mech Robot & Energy Engn, 30 Pildong Ro 1gil, Seoul 04620, South Korea
Funding
National Research Foundation of Singapore;
Keywords
force estimation; interaction force; neural networks; machine learning; minimally invasive surgery; suture tensile force; FEEDBACK; DEFORMATION;
DOI
10.3390/s21010110
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Classification Codes
070302; 081704;
Abstract
Compared with laparoscopy, robot-assisted minimally invasive surgery lacks force feedback, which is important for preventing suture breakage. To overcome this problem, surgeons infer the suture force from their proprioception and the 2D image, comparing them with their training experience. Based on this idea, a deep-learning-based method is proposed that estimates the tensile force on the suture from a single image and the robot position, without a force sensor. A neural network combining a modified Inception-ResNet-V2 with Long Short-Term Memory (LSTM) networks is used to estimate the suture pulling force. The feasibility of the proposed network is verified on a generated database that records the interaction for two different artificial skins and two different situations (in vivo and in vitro) at 13 image viewing angles, obtained by changing the tool positions of a master-slave robotic system. In the evaluation of interaction force estimation, the proposed learning models successfully estimated the tensile force at 10 viewing angles that were unseen during training.
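To make the described architecture concrete, the following is a minimal PyTorch sketch of the general idea in the abstract: per-frame image features are concatenated with the robot tool position and fed to an LSTM that regresses the suture tensile force. It is not the authors' implementation; the small convolutional encoder merely stands in for the modified Inception-ResNet-V2 backbone, and the class name SutureForceEstimator, the 3-D position input, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): image features + robot position -> LSTM -> force.
import torch
import torch.nn as nn


class SutureForceEstimator(nn.Module):
    def __init__(self, pos_dim=3, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Stand-in image encoder; the paper uses a modified Inception-ResNet-V2.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Temporal model over image features concatenated with the tool position.
        self.lstm = nn.LSTM(feat_dim + pos_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # scalar tensile force per time step

    def forward(self, images, positions):
        # images:    (batch, time, 3, H, W) camera frames
        # positions: (batch, time, pos_dim) robot tool position from kinematics
        b, t = images.shape[:2]
        feats = self.encoder(images.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([feats, positions], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out).squeeze(-1)  # (batch, time) estimated force


if __name__ == "__main__":
    model = SutureForceEstimator()
    imgs = torch.randn(2, 5, 3, 224, 224)   # two sequences of five frames
    pos = torch.randn(2, 5, 3)              # matching tool positions
    print(model(imgs, pos).shape)           # torch.Size([2, 5])
```

In such a setup the force-sensor readings recorded from the slave robot would serve as regression targets during training, so that at test time only the image and the robot position are needed.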
Pages: 1-13
Page Count: 13