Performance of ChatGPT in Ophthalmic Registration and Clinical Diagnosis: Cross-Sectional Study

Cited by: 0
Authors
Ming, Shuai [1 ,2 ]
Guo, Xiaohong
Guo, Qingge [1 ,2 ]
Xie, Kunpeng [1 ]
Chen, Dandan [1 ,2 ]
Lei, Bo [1 ,2 ,3 ]
Affiliations
[1] Henan Eye Hosp, Henan Prov Peoples Hosp, Henan Eye Inst, Dept Ophthalmol, 7 Weiwu Rd, Zhengzhou, Peoples R China
[2] Henan Acad Innovat Med Sci, Eye Inst, Zhengzhou, Peoples R China
[3] Zhengzhou Univ, Peoples Hosp, Henan Clin Res Ctr Ocular Dis, Zhengzhou, Peoples R China
Keywords
artificial intelligence; chatbot; ChatGPT; ophthalmic registration; clinical diagnosis; AI; cross-sectional study; eye disease; eye disorder; ophthalmology; health care; outpatient registration; clinical decision-making; generative AI; vision impairment; ARTIFICIAL-INTELLIGENCE;
D O I
10.2196/60226
Chinese Library Classification
R19 [Health care organization and services (health administration)];
Abstract
Background: Artificial intelligence (AI) chatbots such as ChatGPT are expected to impact vision health care significantly. Their potential to optimize the consultation process and their diagnostic capabilities across a range of ophthalmic subspecialties have yet to be fully explored.

Objective: This study aims to investigate the performance of AI chatbots in recommending ophthalmic outpatient registration and diagnosing eye diseases within clinical case profiles.

Methods: This cross-sectional study used clinical cases from Chinese Standardized Resident Training-Ophthalmology (2nd Edition). For each case, 2 profiles were created: patient with history (Hx) and patient with history and examination (Hx+Ex). These profiles served as independent queries for GPT-3.5 and GPT-4.0 (accessed from March 5 to 18, 2024). Similarly, 3 ophthalmic residents were posed the same profiles in a questionnaire format. The accuracy of recommending ophthalmic subspecialty registration was primarily evaluated using Hx profiles. The accuracy of the top-ranked diagnosis and the accuracy of the diagnosis within the top 3 suggestions (do-not-miss diagnosis) were assessed using Hx+Ex profiles. The gold standard for judgment was the published, official diagnosis. Characteristics of incorrect diagnoses by ChatGPT were also analyzed.

Results: A total of 208 clinical profiles from 12 ophthalmic subspecialties were analyzed (104 Hx and 104 Hx+Ex profiles). For Hx profiles, GPT-3.5, GPT-4.0, and residents showed comparable accuracy in registration suggestions (66/104, 63.5%; 81/104, 77.9%; and 72/104, 69.2%, respectively; P=.07), with ocular trauma, retinal diseases, and strabismus and amblyopia achieving the top 3 accuracies. For Hx+Ex profiles, both GPT-4.0 and residents demonstrated higher diagnostic accuracy than GPT-3.5 (62/104, 59.6% and 63/104, 60.6% vs 41/104, 39.4%; P=.003 and P=.001, respectively). Accuracy for do-not-miss diagnoses also improved (79/104, 76% and 68/104, 65.4% vs 51/104, 49%; P<.001 and P=.02, respectively). The highest diagnostic accuracies were observed in glaucoma; lens diseases; and eyelid, lacrimal, and orbital diseases. GPT-4.0 recorded fewer incorrect top-3 diagnoses (25/42, 60% vs 53/63, 84%; P=.005) and more partially correct diagnoses (21/42, 50% vs 7/63, 11%; P<.001) than GPT-3.5, whereas GPT-3.5 produced more completely incorrect diagnoses (27/63, 43% vs 7/42, 17%; P=.005) and more diagnoses that were less precise (22/63, 35% vs 5/42, 12%; P=.009).

Conclusions: GPT-3.5 and GPT-4.0 showed intermediate performance in recommending ophthalmic subspecialties for registration. While GPT-3.5 underperformed, GPT-4.0 approached and numerically surpassed the residents in differential diagnosis. AI chatbots show promise in facilitating ophthalmic patient registration. However, their integration into diagnostic decision-making requires further validation.
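The Results above rest on pairwise comparisons of accuracy proportions (for example, GPT-4.0's 62/104 vs GPT-3.5's 41/104 top-ranked diagnostic accuracy on Hx+Ex profiles). The sketch below shows how such a comparison could be reproduced; the abstract does not state which statistical test produced its P values, so the 2x2 chi-square test and the helper function name used here are illustrative assumptions, not the authors' reported method.

```python
# Minimal sketch: comparing two reported accuracy proportions with a
# chi-square test of independence (test choice is an assumption; the
# abstract does not specify how its P values were computed).
from scipy.stats import chi2_contingency

def compare_accuracy(correct_a, total_a, correct_b, total_b):
    """Build a 2x2 table of correct/incorrect counts and run a chi-square test."""
    table = [
        [correct_a, total_a - correct_a],
        [correct_b, total_b - correct_b],
    ]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

# Figures quoted in the Results: GPT-4.0 vs GPT-3.5 top-ranked diagnostic
# accuracy on Hx+Ex profiles (62/104 vs 41/104).
chi2, p = compare_accuracy(62, 104, 41, 104)
print(f"chi2 = {chi2:.2f}, P = {p:.3f}")
```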
Pages: 14