Augmenting efficient real-time surgical instrument segmentation in video with point tracking and Segment Anything

Cited: 0
|
Authors
Wu, Zijian [1 ]
Schmidt, Adam [1 ]
Kazanzides, Peter [2 ]
Salcudean, Septimiu E. [1 ]
Affiliations
[1] Univ British Columbia, Dept Elect & Comp Engn, Robot & Control Lab, Vancouver, BC V6T 1Z4, Canada
[2] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD USA
Funding
Canada Foundation for Innovation;
Keywords
medical robotics; robot vision; image segmentation; surgery; RECOGNITION;
DOI
10.1049/htl2.12111
Chinese Library Classification
R318 [Biomedical Engineering];
Discipline code
0831;
Abstract
The Segment Anything Model (SAM) is a powerful vision foundation model that is revolutionizing the traditional segmentation paradigm. Despite this, its reliance on prompting each frame and its large computational cost limit its use in robotically assisted surgery. Applications such as augmented reality guidance require little user intervention along with efficient inference to be clinically usable. This study addresses these limitations by adopting lightweight SAM variants to meet the efficiency requirement and employing fine-tuning techniques to enhance their generalization in surgical scenes. Recent advancements in tracking any point have shown promising results in both accuracy and efficiency, particularly when points are occluded or leave the field of view. Inspired by this progress, a novel framework is presented that combines an online point tracker with a lightweight SAM model fine-tuned for surgical instrument segmentation. Sparse points within the region of interest are tracked and used to prompt SAM throughout the video sequence, providing temporal consistency. The quantitative results surpass the state-of-the-art semi-supervised video object segmentation method XMem on the EndoVis 2015 dataset, with 84.8 IoU and 91.0 Dice. The method achieves performance comparable to XMem and to transformer-based fully supervised segmentation methods on the ex vivo UCL dVRK and in vivo CholecSeg8k datasets. In addition, the proposed method shows promising zero-shot generalization on the label-free STIR dataset. In terms of efficiency, the method was tested on single GeForce RTX 4060 and RTX 4090 GPUs, achieving inference speeds of over 25 FPS and 90 FPS respectively. Code is available at: .
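The per-frame loop the abstract describes — track sparse prompt points online, then feed the visible points to a promptable segmenter — can be sketched as follows. This is a minimal illustration of the control flow only: `track_points` and `segment_with_points` are hypothetical stubs standing in for the paper's online point tracker and fine-tuned lightweight SAM variant, not their real APIs.

```python
import numpy as np

def track_points(prev_points, frame):
    """Stub online tracker: a real one (e.g. a TAP-style model) would update
    point locations and flag occluded or out-of-view points as not visible."""
    visible = np.ones(len(prev_points), dtype=bool)
    return prev_points, visible

def segment_with_points(frame, points):
    """Stub promptable segmenter: marks a small square around each prompt
    point, where a real SAM variant would predict an instrument mask."""
    mask = np.zeros(frame.shape[:2], dtype=bool)
    for x, y in points.astype(int):
        mask[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = True
    return mask

def segment_video(frames, init_points):
    """Propagate sparse prompt points through the sequence and segment each
    frame with the currently visible points, giving temporal consistency."""
    points = init_points
    masks = []
    for frame in frames:
        points, visible = track_points(points, frame)
        masks.append(segment_with_points(frame, points[visible]))
    return masks

frames = [np.zeros((32, 32, 3)) for _ in range(4)]
masks = segment_video(frames, np.array([[10.0, 10.0], [20.0, 20.0]]))
```

Because the points are tracked rather than re-specified, the user only prompts the first frame; every later frame is segmented without intervention, which is the property the abstract highlights for clinical use.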
Pages: 9