Evaluating the accuracy of ChatGPT-4 in predicting ASA scores: A prospective multicentric study

Cited by: 9
Authors
Turan, Engin Ihsan [1 ,3 ]
Baydemir, Abdurrahman Engin [2 ]
Ozcan, Funda Gumus
Sahin, Ayca Sultan [1 ]
Affiliations
[1] Istanbul Hlth Sci Univ, Dept Anesthesiol, Kanuni Sultan Suleyman Educ & Training Hosp, Istanbul, Turkiye
[2] Basaksehir Cam ve Sakura City Hosp, Dept Anesthesiol, Istanbul, Turkiye
[3] Istanbul Hlth Sci Univ, Anesthesiol & Reanimat Dept, Dept Gastroenterol, Kanuni Sultan Suleyman Hosp, Atakent Mahallesi Turgut Ozal Bulvari 46-1, TR-34303 Istanbul, Turkiye
DOI
10.1016/j.jclinane.2024.111475
CLC number
R614 [Anesthesiology]
Subject classification code
100217
Abstract
Background: This study investigates the potential of ChatGPT-4, developed by OpenAI, to support medical decision-making, particularly in preoperative assessment using the American Society of Anesthesiologists (ASA) physical status classification. The ASA score, a critical tool for evaluating patients' health status and anesthesia risk before surgery, categorizes patients from I to VI based on overall health and risk factors. Despite its widespread use, assigning an ASA score remains a subjective process that may benefit from AI-supported assessment. This research evaluates how accurately ChatGPT-4 predicts ASA scores compared with expert anesthesiologists' assessments.
Methods: In this prospective multicentric study, ethics board approval was obtained and the study was registered with clinicaltrials.gov (NCT06321445). We included 2851 patients from anesthesiology outpatient clinics, covering all age groups from neonates onward and both genders, with ASA scores of I-IV. Patients with ASA V or VI scores, emergency operations, and cases with insufficient information for ASA score determination were excluded. Patients' demographics, health conditions, and the ASA scores assigned by anesthesiologists were collected and anonymized. ChatGPT-4 was then tasked with assigning ASA scores based on the standardized patient data.
Results: There was a high level of concordance between ChatGPT-4's predictions and the anesthesiologists' evaluations, with a Cohen's kappa of 0.858 (p < 0.001). While the model demonstrated over 90% accuracy in predicting ASA scores I to III, it showed notable variance for ASA IV, suggesting a potential limitation in assessing patients with more complex health conditions.
Discussion: The findings suggest that ChatGPT-4 can contribute meaningfully to the medical field by supporting anesthesiologists in preoperative assessment. This study demonstrates ChatGPT-4's efficacy in medical data analysis and decision-making and opens new avenues for AI applications in healthcare, particularly in enhancing patient safety and optimizing surgical outcomes. Further research is needed to refine AI models for complex case assessment and to integrate them seamlessly into clinical workflows.
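To make the workflow concrete, the sketch below shows one way the two steps described in the abstract could be reproduced: prompting a GPT-4 model with an anonymized, standardized patient summary to obtain an ASA class, and comparing model predictions with anesthesiologists' scores using unweighted Cohen's kappa. This is a minimal illustration, not the authors' actual pipeline; the prompt wording, the "gpt-4" model identifier, and the sample patient records are assumptions introduced here for demonstration.

```python
# Minimal sketch (not the authors' pipeline): query a GPT-4 model with an anonymized,
# standardized patient summary and compare its ASA prediction with the anesthesiologist's
# score via Cohen's kappa. Prompt wording, model identifier, and records are illustrative.
import re
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def predict_asa(patient_summary: str) -> int:
    """Ask the model for a single ASA class (I-IV, encoded as 1-4)."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed identifier; the study used ChatGPT-4
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an anesthesiologist. Assign an ASA Physical Status class "
                    "(1, 2, 3, or 4) to the patient described below. Reply with the number only."
                ),
            },
            {"role": "user", "content": patient_summary},
        ],
        temperature=0,
    )
    match = re.search(r"[1-4]", response.choices[0].message.content)
    if match is None:
        raise ValueError("No ASA class found in model reply")
    return int(match.group())


# Hypothetical anonymized records: (standardized summary, anesthesiologist's ASA score).
records = [
    (
        "64-year-old male, hypertension and type 2 diabetes, scheduled for inguinal "
        "hernia repair, no functional limitation.",
        2,
    ),
    ("3-week-old neonate, otherwise healthy, scheduled for pyloromyotomy.", 1),
]

model_scores = [predict_asa(summary) for summary, _ in records]
clinician_scores = [score for _, score in records]

# Unweighted Cohen's kappa, the agreement measure reported in the study
# (kappa = 0.858 across 2851 patients).
print("Cohen's kappa:", cohen_kappa_score(clinician_scores, model_scores))
```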
Pages: 7