Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations

Cited by: 53
Authors
Ali, Rohaid [1 ,6 ]
Tang, Oliver Y. [1 ]
Connolly, Ian D. [2 ]
Sullivan, Patricia L. Zadnik [1 ]
Shin, John H. [3 ]
Fridley, Jared S. [1 ]
Asaad, Wael F. [1 ,3 ,4 ,5 ]
Cielo, Deus [1 ]
Oyelese, Adetokunbo A. [1 ]
Doberstein, Curtis E. [1 ]
Gokaslan, Ziya L. [1 ]
Telfeian, Albert E. [1 ]
Affiliations
[1] Blountstown, FL, USA
[2] Massachusetts Gen Hosp, Dept Neurosurg, Boston, MA USA
[3] Rhode Isl Hosp, Norman Prince Neurosci Inst, Dept Neurosci, Providence, RI 02903 USA
[4] Brown Univ, Dept Neurosci, Providence, RI USA
[5] Brown Univ, Carney Inst Brain Sci, Dept Neurosci, Providence, RI USA
[6] Rhode Isl Hosp, Dept Neurosurg, LPG Neurosurg, 593 Eddy St,APC6, Providence, RI 02903 USA
Keywords
Neurosurgery; Medical education; Surgical education; Residency education; Artificial intelligence; Large language models; ChatGPT; GPT-4
DOI
10.1227/neu.0000000000002632
Chinese Library Classification
R74 [Neurology and Psychiatry]
Abstract
BACKGROUND AND OBJECTIVES: Interest in generative large language models (LLMs) has grown rapidly. Although ChatGPT (GPT-3.5), a general-purpose LLM, has shown near-passing performance on medical student board examinations, the performance of ChatGPT and its successor GPT-4 on specialized examinations, and the factors affecting their accuracy, remain unclear. This study assessed the performance of ChatGPT and GPT-4 on a 500-question mock neurosurgical written board examination.
METHODS: The Self-Assessment Neurosurgery Examination (SANS) American Board of Neurological Surgery Self-Assessment Examination 1 was used to evaluate ChatGPT and GPT-4. Questions were in single-best-answer, multiple-choice format. Chi-square, Fisher exact, and univariable logistic regression tests were used to assess performance differences in relation to question characteristics.
RESULTS: ChatGPT (GPT-3.5) and GPT-4 achieved scores of 73.4% (95% CI: 69.3%-77.2%) and 83.4% (95% CI: 79.8%-86.5%), respectively, compared with the user average of 72.8% (95% CI: 68.6%-76.6%). Both LLMs exceeded the previous year's passing threshold of 69%. Although the scores of ChatGPT and question-bank users were equivalent (P = .963), GPT-4 outperformed both (both P < .001). GPT-4 correctly answered every question that ChatGPT answered correctly, as well as 37.6% (50/133) of the questions that ChatGPT missed. Among 12 question categories, GPT-4 significantly outperformed users in each, performed comparably with ChatGPT in 3 (functional, other general, and spine), and outperformed both users and ChatGPT on tumor questions. Increased word count (odds ratio = 0.89 for answering a question correctly per +10 words) and higher-order problem-solving (odds ratio = 0.40, P = .009) were associated with lower accuracy for ChatGPT, but not for GPT-4 (both P > .005). Because multimodal input was not available at the time of this study, ChatGPT and GPT-4 answered 49.5% and 56.8% of image-based questions correctly, respectively, using contextual clues alone.
CONCLUSION: Both LLMs achieved passing scores on a 500-question mock neurosurgical written board examination, with GPT-4 significantly outperforming ChatGPT.
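The style of analysis described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the correct-answer counts (367 and 417 of 500) are back-calculated from the reported 73.4% and 83.4%, and the use of scipy/statsmodels with Clopper-Pearson intervals is an assumption about how the reported confidence intervals and the ChatGPT-versus-GPT-4 comparison might be computed.

```python
# Illustrative sketch only (not the authors' code).
# Counts are assumptions back-calculated from the abstract's percentages.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportion_confint

N = 500
chatgpt_correct = 367  # ~73.4% of 500
gpt4_correct = 417     # ~83.4% of 500

# 95% CI for each model's overall score (Clopper-Pearson, one plausible choice)
for name, k in [("ChatGPT (GPT-3.5)", chatgpt_correct), ("GPT-4", gpt4_correct)]:
    lo, hi = proportion_confint(k, N, alpha=0.05, method="beta")
    print(f"{name}: {k / N:.1%} (95% CI {lo:.1%}-{hi:.1%})")

# Chi-square test of the difference in accuracy between the two models
table = np.array([
    [chatgpt_correct, N - chatgpt_correct],
    [gpt4_correct, N - gpt4_correct],
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"ChatGPT vs GPT-4: chi2 = {chi2:.2f}, P = {p:.4f}")
```

For the word-count finding, the reported odds ratio per +10 words would correspond to exponentiating ten times the word-count coefficient from the univariable logistic regression of answer correctness on question length described in METHODS.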
Pages: 1353-1365
Number of pages: 13