Can ChatGPT help patients answer their otolaryngology questions?

Cited by: 12
Authors
Zalzal, Habib G. [1 ,4 ]
Abraham, Ariel [2 ]
Cheng, Jenhao [3 ]
Shah, Rahul K. [1 ]
Affiliations
[1] Childrens Natl Hosp, Div Otolaryngol Head & Neck Surg, Washington, DC USA
[2] Univ Maryland, Baltimore, MD USA
[3] Childrens Natl Hosp, Qual, Safety, Analyt, Washington, DC USA
[4] Childrens Natl Med Ctr, Div Otolaryngol, 111 Michigan Ave NW, Washington, DC 20010 USA
Source
Keywords
artificial intelligence; ChatGPT; large language model; machine learning; OpenAI; patient education;
DOI
10.1002/lio2.1193
Chinese Library Classification
R76 [Otorhinolaryngology];
Subject Classification Code
100213 ;
Abstract
Background: Over the past year, the world has been captivated by the potential of artificial intelligence (AI). The appetite for AI in science, and specifically in healthcare, is enormous. It is imperative to understand the credibility of large language models in assisting the public with medical queries.
Objective: To evaluate the ability of ChatGPT to provide reasonably accurate answers to public queries within the domain of otolaryngology.
Methods: Two board-certified otolaryngologists (HZ, RS) entered 30 text-based patient queries into the ChatGPT-3.5 model. ChatGPT responses were rated by the physicians on a three-point scale (accurate, partially accurate, incorrect), while layperson reviewers rated their confidence in the responses on a similar three-point scale. Demographic data on gender and education level were recorded for the public reviewers. Inter-rater agreement percentages were based on the binomial distribution for calculating 95% confidence intervals and performing significance tests. Statistical significance was defined as p < .05 for two-sided tests.
Results: In testing patient queries, both otolaryngology physicians found that ChatGPT answered 98.3% of questions correctly, but only 79.8% (range 51.7%-100%) of lay reviewers were confident that the AI model's responses were accurate (corrected agreement = 0.682; p < .001). Among the layperson responses, the corrected coefficient indicated moderate agreement (0.571; p < .001). No correlation was noted with age, gender, or education level for the layperson responses.
Conclusion: From a physician standpoint, ChatGPT is highly accurate in responding to otolaryngology questions posed by the public. Public reviewers were not fully confident in the AI model, with subjective concerns reflecting less trust in AI answers than in physician explanations. Larger evaluations with a representative public sample and broader medical questions should be conducted promptly by appropriate organizations, governing bodies, and/or governmental agencies to instill public confidence in AI and ChatGPT as a medical resource.
Level of Evidence: 4.
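The Methods describe 95% confidence intervals for agreement percentages based on the binomial distribution. As an illustrative sketch only (not the authors' actual analysis code), a normal-approximation (Wald) binomial interval for a rated proportion can be computed as follows; the counts used in the example are hypothetical:

```python
from math import sqrt

def binomial_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% CI for a proportion via the normal (Wald) approximation,
    clamped to the valid [0, 1] range."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: 50 of 100 responses rated accurate
low, high = binomial_ci(50, 100)
```

For small samples or proportions near 0 or 1 (such as the 98.3% accuracy reported here), a Wilson or exact (Clopper-Pearson) interval is generally preferred over the Wald approximation.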
Pages: 8