Can ChatGPT help patients answer their otolaryngology questions?

Cited by: 12
Authors
Zalzal, Habib G. [1 ,4 ]
Abraham, Ariel [2 ]
Cheng, Jenhao [3 ]
Shah, Rahul K. [1 ]
Affiliations
[1] Childrens Natl Hosp, Div Otolaryngol Head & Neck Surg, Washington, DC USA
[2] Univ Maryland, Baltimore, MD USA
[3] Childrens Natl Hosp, Qual, Safety, Analyt, Washington, DC USA
[4] Childrens Natl Med Ctr, Div Otolaryngol, 111 Michigan Ave NW, Washington, DC 20010 USA
Keywords
artificial intelligence; ChatGPT; large language model; machine learning; OpenAI; patient education
DOI
10.1002/lio2.1193
Chinese Library Classification: R76 [Otorhinolaryngology]
Discipline code: 100213
Abstract
Background: Over the past year, the world has been captivated by the potential of artificial intelligence (AI). The appetite for AI in science, and specifically in healthcare, is enormous. It is imperative to understand the credibility of large language models in assisting the public with medical queries.
Objective: To evaluate the ability of ChatGPT to provide reasonably accurate answers to public queries within the domain of otolaryngology.
Methods: Two board-certified otolaryngologists (HZ, RS) entered 30 text-based patient queries into the ChatGPT-3.5 model. ChatGPT's responses were rated by the physicians on a three-point scale (accurate, partially accurate, incorrect), while layperson reviewers rated their confidence in each response on a similar three-point scale. Demographic data on gender and education level were recorded for the public reviewers. Inter-rater agreement percentages were based on the binomial distribution for calculating 95% confidence intervals and performing significance tests. Statistical significance was defined as p < .05 for two-sided tests.
Results: In testing patient queries, both otolaryngology physicians found that ChatGPT answered 98.3% of questions correctly, but only 79.8% (range 51.7%-100%) of laypersons were confident that the AI model's responses were accurate (corrected agreement = 0.682; p < .001). Among the layperson responses, the corrected coefficient indicated moderate agreement (0.571; p < .001). No correlation was noted with age, gender, or education level among the layperson responses.
Conclusion: From a physician standpoint, ChatGPT is highly accurate in responding to otolaryngology questions posed by the public. Public reviewers were not fully confident in the AI model, with subjective concerns reflecting less trust in AI answers than in physician explanations. Larger evaluations with a representative public sample and broader medical questions should be conducted promptly by appropriate organizations, governing bodies, and/or governmental agencies to instill public confidence in AI and ChatGPT as a medical resource.
Level of Evidence: 4.
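The agreement statistics described in the Methods (a raw agreement percentage with a binomial 95% confidence interval, plus a chance-corrected agreement coefficient) can be sketched as below. This is an illustrative sketch with made-up ratings, not the study's data, and it assumes the "corrected agreement" is a Cohen's-kappa-style statistic; the authors may have used a different chance-corrected coefficient.

```python
import math

def agreement_with_ci(ratings_a, ratings_b, z=1.96):
    """Raw two-rater agreement with a normal-approximation
    (binomial) 95% confidence interval, clipped to [0, 1]."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    p = agree / n
    se = math.sqrt(p * (1 - p) / n)  # binomial standard error of a proportion
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement (Cohen's kappa) for two raters --
    one common form of 'corrected agreement' (illustrative only)."""
    n = len(ratings_a)
    cats = set(ratings_a) | set(ratings_b)
    # Observed agreement
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    pe = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Hypothetical ratings on the study's three-point scale
rater1 = ["accurate"] * 8 + ["partial", "incorrect"]
rater2 = ["accurate"] * 7 + ["partial", "partial", "incorrect"]
p, (lo, hi) = agreement_with_ci(rater1, rater2)
kappa = cohens_kappa(rater1, rater2)
```

On this toy data the raw agreement is 0.9, while kappa is lower because it discounts agreement expected by chance alone, which is why a 98.3% raw accuracy can coexist with a corrected coefficient of 0.682 as reported in the Results.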
Pages: 8