Human-AI collaboration for unsupervised categorization of live surgical feedback

Cited by: 1
Authors
Kocielnik, Rafal [1]
Yang, Cherine H. [2]
Ma, Runzhuo [3]
Cen, Steven Y. [4]
Wong, Elyssa Y. [5]
Chu, Timothy N. [2]
Knudsen, J. Everett [2]
Wager, Peter [2]
Heard, John [2]
Ghaffar, Umar [2]
Anandkumar, Anima [1]
Hung, Andrew J. [2]
Affiliations
[1] Caltech, Computing and Mathematical Sciences, Pasadena, CA, USA
[2] Cedars-Sinai Medical Center, Department of Urology, Los Angeles, CA 90048, USA
[3] Weill Cornell Medicine, NewYork-Presbyterian Hospital, Department of Urology, New York, NY, USA
[4] University of Southern California, Keck School of Medicine, Los Angeles, CA, USA
[5] University of Texas Southwestern Medical Center, Department of Urology, Dallas, TX, USA
Source
NPJ DIGITAL MEDICINE | 2024, Vol. 7, No. 1
Keywords
OPERATING-ROOM;
DOI
10.1038/s41746-024-01383-3
CLC number
R19 [Health care organization and services (health services administration)]
Subject classification number
Abstract
Formative verbal feedback during live surgery is essential for adjusting trainee behavior and accelerating skill acquisition. Despite its importance, understanding optimal feedback is challenging due to the difficulty of capturing and categorizing feedback at scale. We propose a Human-AI Collaborative Refinement Process that uses unsupervised machine learning (Topic Modeling) with human refinement to discover feedback categories from surgical transcripts. Our discovered categories are rated highly for clinical clarity and are relevant to practice, including topics like "Handling and Positioning of (tissue)" and "(Tissue) Layer Depth Assessment and Correction [during tissue dissection]." These AI-generated topics significantly enhance predictions of trainee behavioral change, providing insights beyond traditional manual categorization. For example, feedback on "Handling Bleeding" is linked to improved behavioral change. This work demonstrates the potential of AI to analyze surgical feedback at scale, informing better training guidelines and paving the way for automated feedback and cueing systems in surgery.
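As a rough illustration of the unsupervised step described above: the abstract names topic modeling but not a specific algorithm, so the sketch below uses scikit-learn's LatentDirichletAllocation as an illustrative stand-in, with invented feedback utterances in place of real surgical transcripts. The top words per raw topic are what a clinical expert would then refine into named categories, which is the human half of the collaboration.

# Minimal sketch of topic discovery over feedback transcripts.
# Assumptions (not from the paper): LDA as the topic model, the sample
# utterances below, and n_components=3.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

transcripts = [  # hypothetical trainer utterances
    "grab the tissue a little more gently and reposition your grasper",
    "you are too deep, come up a layer before you continue dissecting",
    "there is bleeding here, apply pressure and get the suction in",
    "angle the needle driver so the needle enters perpendicular to the tissue",
    "stay in that plane, the layer depth looks right now",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(transcripts)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Surface the top words per raw topic for expert review and renaming.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")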
Pages: 12