ChatGPT Earns American Board Certification in Hand Surgery

Cited by: 6
Authors
Ghanem, Diane [1]
Nassar, Joseph E.
El Bachour, Joseph [2]
Hanna, Tammam [3]
Affiliations
[1] Johns Hopkins Univ Hosp, Dept Orthopaed Surg, Baltimore, MD 21287 USA
[2] Amer Univ Beirut, Fac Med, Beirut, Lebanon
[3] Texas Tech Univ, Hlth Sci Ctr, Dept Orthopaed Surg & Rehabil, Lubbock, TX USA
Source
HAND SURGERY & REHABILITATION | 2024, Vol. 43, Issue 3
Keywords
Artificial intelligence; Language model; ChatGPT; Orthopaedic surgery; Hand surgery; Hand board examination
DOI
10.1016/j.hansur.2024.101688
Chinese Library Classification (CLC)
R826.8 [Plastic Surgery]; R782.2 [Oral and Maxillofacial Plastic Surgery]; R726.2 [Pediatric Plastic Surgery]; R62 [Plastic Surgery (Reparative Surgery)]
Subject Classification Code
Abstract
Purpose: Artificial intelligence (AI), and specifically ChatGPT, has shown potential in healthcare, yet its performance on specialized medical examinations such as the Orthopaedic Surgery In-Training Examination and the European Board of Hand Surgery diploma has been inconsistent. This study evaluates the capability of ChatGPT-4 to pass the American hand surgery certifying examination.

Methods: ChatGPT-4 was tested on the 2019 American Society for Surgery of the Hand (ASSH) Self-Assessment Exam. All 200 questions available online (https://onlinecme.assh.org) were retrieved. All media-containing questions were flagged and carefully reviewed; eight were excluded because they either relied purely on videos or could not be answered from the presented information alone. Descriptive statistics were used to summarize ChatGPT-4's performance (% correct). The ASSH report was used to compare ChatGPT-4's performance to that of the 322 physicians who completed the 2019 ASSH self-assessment.

Results: ChatGPT-4 answered 192 questions with an overall score of 61.98%. It scored 55.56% on media-containing questions and 65.83% on non-media questions, with no statistically significant difference in performance by media inclusion. Although it scored below the average physician overall, ChatGPT-4 outperformed the physician average in the 'vascular' section (81.82%). Its performance was lower in the 'bone and joint' (48.54%) and 'neuromuscular' (56.25%) sections.

Conclusions: ChatGPT-4 achieved a good overall score of 61.98%. This AI language model demonstrates significant capability in processing and answering specialized medical examination questions, albeit with room for improvement in areas requiring complex clinical judgment and nuanced interpretation. Its proficiency is influenced by the structure and language of the examination, and it is no replacement for the depth of trained medical specialists. This study underscores the supportive role of AI in medical education and clinical decision-making while highlighting its current limitations in nuanced fields such as hand surgery. © 2024 SFCM. Published by Elsevier Masson SAS. All rights reserved.
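The reported percentages are internally consistent, and the short Python sketch below reproduces them. Note that the per-category question counts used here (72 media-containing and 120 non-media questions, with 40 and 79 answered correctly) are assumptions inferred from the published percentages, not figures stated in the abstract.

```python
# Back-of-envelope check of the scores reported in the abstract.
# ASSUMPTION: the per-category counts below are NOT stated in the abstract;
# they are inferred only because they reproduce the published percentages
# (55.56%, 65.83%, and 61.98% over 192 questions) exactly.
from scipy.stats import fisher_exact

media_correct, media_total = 40, 72          # assumed: 40/72  -> 55.56%
nonmedia_correct, nonmedia_total = 79, 120   # assumed: 79/120 -> 65.83%


def pct(correct: int, total: int) -> float:
    """Percentage correct, rounded to two decimals as in the abstract."""
    return round(100 * correct / total, 2)


print(pct(media_correct, media_total))        # 55.56
print(pct(nonmedia_correct, nonmedia_total))  # 65.83
print(pct(media_correct + nonmedia_correct,
          media_total + nonmedia_total))      # 61.98 (119/192 correct)

# The abstract reports no statistically significant difference by media
# inclusion; with the assumed counts, a Fisher exact test on the 2x2
# correct/incorrect table can check that claim (p should exceed 0.05).
table = [
    [media_correct, media_total - media_correct],
    [nonmedia_correct, nonmedia_total - nonmedia_correct],
]
_, p_value = fisher_exact(table)
print(f"Fisher exact p = {p_value:.3f}")
```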
Pages: 6