Autonomous Neurosurgical Instrument Segmentation Using End-to-End Learning

Cited by: 9
Authors
Kalavakonda, Niveditha [1 ]
Hannaford, Blake [1 ]
Qazi, Zeeshan [2 ]
Sekhar, Laligam [2 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Harborview Med Ctr, Seattle, WA USA
Keywords
APPEARANCE; TRACKING;
DOI
10.1109/CVPRW.2019.00076
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Monitoring surgical instruments is an essential task in computer-assisted interventions and surgical robotics. It is also important for navigation, data analysis, skill assessment, and surgical workflow analysis in conventional surgery. However, there are no standard datasets and benchmarks for tool identification in neurosurgery. To this end, we are releasing a novel neurosurgical instrument segmentation dataset called NeuroID to advance research in the field. Delineating surgical tools from the background requires accurate pixel-wise instrument segmentation. In this paper, we present a comparison between three encoder-decoder approaches to binary segmentation of neurosurgical instruments, where we classify each pixel in the image as either tool or background. A baseline performance was obtained by using heuristics to combine extracted features. We also extend the analysis to a publicly available robotic instrument segmentation dataset and include its results. The source code for our methods and the neurosurgical instrument dataset will be made publicly available(1) to facilitate reproducibility.
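The abstract describes pixel-wise binary segmentation, where each pixel is classified as tool or background and competing models are compared on the resulting masks. Such comparisons are typically scored with overlap metrics like intersection-over-union (Jaccard) and the Dice coefficient. A minimal illustrative sketch of these metrics on toy binary masks — the function names and example masks are ours, not from the paper:

```python
def iou(pred, gt):
    """Intersection-over-union of two binary masks (nested lists of 0/1)."""
    inter = sum(p & g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    union = sum(p | g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    return inter / union if union else 1.0  # both masks empty -> perfect match

def dice(pred, gt):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|)."""
    inter = sum(p & g for rp, rg in zip(pred, gt) for p, g in zip(rp, rg))
    total = sum(v for row in pred for v in row) + sum(v for row in gt for v in row)
    return 2 * inter / total if total else 1.0

# Toy 4x4 masks: 1 = tool pixel, 0 = background.
gt   = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
pred = [[0, 1, 1, 1],   # one false-positive pixel at (0, 1)
        [0, 0, 1, 1],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]

print(iou(pred, gt))   # 4 overlapping pixels / 5 in the union = 0.8
print(dice(pred, gt))  # 2*4 / (5 + 4) = 8/9
```

Dice weights the overlap more generously than IoU for the same masks, which is why segmentation papers often report both.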
Pages: 514-516 (3 pages)
Related Papers
(50 items)
  • [41] End-to-End Autonomous Driving Decision Based on Deep Reinforcement Learning
    Huang Z.-Q.
    Qu Z.-W.
    Zhang J.
    Zhang Y.-X.
    Tian R.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2020, 48 (09): 1711-1719
  • [42] End-to-end deep learning for reverse driving trajectory of autonomous bulldozer
    You, Ke
    Ding, Lieyun
    Jiang, Yutian
    Wu, Zhangang
    Zhou, Cheng
    KNOWLEDGE-BASED SYSTEMS, 2022, 252
  • [43] FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation
    Voigtlaender, Paul
    Chai, Yuning
    Schroff, Florian
    Adam, Hartwig
    Leibe, Bastian
    Chen, Liang-Chieh
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 9473-9482
  • [44] Fully and Weakly Supervised Referring Expression Segmentation With End-to-End Learning
    Li, Hui
    Sun, Mingjie
    Xiao, Jimin
    Lim, Eng Gee
    Zhao, Yao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (10) : 5999 - 6012
  • [45] Correction to: An end-to-end differential network learning method for semantic segmentation
    Tai Hu
    Ming Yang
    Wanqi Yang
    Aishi Li
    International Journal of Machine Learning and Cybernetics, 2019, 10 : 1925 - 1925
  • [46] End-to-end learning of brain tissue segmentation from imperfect labeling
    Fedorov, Alex
    Johnson, Jeremy
    Damaraju, Eswar
    Ozerin, Alexei
    Calhoun, Vince
    Plis, Sergey
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017: 3785-3792
  • [47] End-to-End Adversarial Shape Learning for Abdomen Organ Deep Segmentation
    Cai, Jinzheng
    Xia, Yingda
    Yang, Dong
    Xu, Daguang
    Yang, Lin
    Roth, Holger
    MACHINE LEARNING IN MEDICAL IMAGING (MLMI 2019), 2019, 11861 : 124 - 132
  • [48] Learning to See the Invisible: End-to-End Trainable Amodal Instance Segmentation
    Follmann, Patrick
    Koenig, Rebecca
    Haertinger, Philipp
    Klostermann, Michael
    Boettger, Tobias
    2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2019: 1328-1336
  • [49] Crowd Counting Using End-to-End Semantic Image Segmentation
    Khan, Khalil
    Khan, Rehan Ullah
    Albattah, Waleed
    Nayab, Durre
    Qamar, Ali Mustafa
    Habib, Shabana
    Islam, Muhammad
    ELECTRONICS, 2021, 10 (11)
  • [50] An End-to-End Network for Panoptic Segmentation
    Liu, Huanyu
    Peng, Chao
    Yu, Changqian
    Wang, Jingbo
    Liu, Xu
    Yu, Gang
    Jiang, Wei
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 6165-6174