Evaluating the accuracy of ChatGPT-4 in predicting ASA scores: A prospective multicentric study

Cited by: 9
Authors:
Turan, Engin Ihsan [1 ,3 ]
Baydemir, Abdurrahman Engin [2 ]
Ozcan, Funda Gumus
Sahin, Ayca Sultan [1 ]
Affiliations:
[1] Istanbul Hlth Sci Univ, Dept Anesthesiol, Kanuni Sultan Suleyman Educ & Training Hosp, Istanbul, Turkiye
[2] Basaksehir Cam ve Sakura City Hosp, Dept Anesthesiol, Istanbul, Turkiye
[3] Istanbul Hlth Sci Univ, Anesthesiol & Reanimat Dept, Dept Gastroenterol, Kanuni Sultan Suleyman Hosp, Atakent Mahallesi Turgut Ozal Bulvari 46-1, TR-34303 Istanbul, Turkiye
DOI: 10.1016/j.jclinane.2024.111475
Chinese Library Classification: R614 [Anesthesiology]
Subject Classification Code: 100217
Abstract:
Background: This study investigates the potential of ChatGPT-4, developed by OpenAI, to enhance medical decision-making, particularly in preoperative assessments using the American Society of Anesthesiologists (ASA) scoring system. The ASA score, a critical tool for evaluating patients' health status and anesthesia risk before surgery, categorizes patients from I to VI based on their overall health and risk factors. Despite its widespread use, assigning an accurate ASA score remains a subjective process that may benefit from AI-supported assessment. This research evaluates how accurately ChatGPT-4 predicts ASA scores compared with expert anesthesiologists' assessments.

Methods: In this prospective multicentric study, ethics board approval was obtained and the study was registered with clinicaltrials.gov (NCT06321445). We included 2851 patients from anesthesiology outpatient clinics, of all ages (including neonates) and genders, with ASA scores I to IV. Patients with ASA V or VI scores, emergency operations, or insufficient information for ASA score determination were excluded. Patients' demographics, health conditions, and anesthesiologist-assigned ASA scores were collected and anonymized. ChatGPT-4 was then tasked with assigning ASA scores based on the standardized patient data.

Results: Our results indicate a high level of concordance between ChatGPT-4's predictions and the anesthesiologists' evaluations: Cohen's kappa analysis yielded a kappa value of 0.858 (p < 0.001). While the model demonstrated over 90% accuracy in predicting ASA scores I to III, it showed notable variance for ASA IV scores, suggesting a potential limitation in assessing patients with more complex health conditions.

Discussion: The findings suggest that ChatGPT-4 can contribute meaningfully to the medical field by supporting anesthesiologists in preoperative assessments. This study not only demonstrates ChatGPT-4's efficacy in medical data analysis and decision-making but also opens new avenues for AI applications in healthcare, particularly in enhancing patient safety and optimizing surgical outcomes. Further research is needed to refine AI models for complex case assessment and to integrate them seamlessly into clinical workflows.
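The headline results (kappa = 0.858; over 90% accuracy for ASA I to III, with notable variance for ASA IV) follow from a standard rater-agreement analysis. The Python sketch below shows how such an analysis is typically computed with scikit-learn; the sample data and variable names are hypothetical placeholders, not the study's actual records or analysis code.

```python
# Minimal sketch of the agreement analysis described in the abstract,
# assuming two aligned lists of ASA scores (I-IV, encoded 1-4): one from
# the anesthesiologists and one parsed from ChatGPT-4's replies. All
# variable names and sample data here are hypothetical.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

anesthesiologist_scores = [1, 2, 2, 3, 1, 4, 3, 2]  # hypothetical reference ratings
chatgpt_scores          = [1, 2, 2, 3, 1, 3, 3, 2]  # hypothetical model ratings

# Unweighted Cohen's kappa, the concordance statistic the study reports
# (kappa = 0.858 across 2851 patients).
kappa = cohen_kappa_score(anesthesiologist_scores, chatgpt_scores)
print(f"Cohen's kappa: {kappa:.3f}")

# Per-class agreement from the confusion matrix: row i counts patients the
# anesthesiologists rated as ASA class i+1; the diagonal entry is how many
# of them ChatGPT-4 rated the same.
cm = confusion_matrix(anesthesiologist_scores, chatgpt_scores, labels=[1, 2, 3, 4])
for i, row in enumerate(cm):
    total = row.sum()
    if total:
        print(f"ASA {['I', 'II', 'III', 'IV'][i]}: {row[i] / total:.0%} "
              f"agreement ({row[i]}/{total})")
```

Reporting agreement per class, as in the loop above, is what exposes the pattern the abstract describes: overall kappa can be high while a rare class such as ASA IV accounts for most of the discordance.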
Pages: 7