Patient education resources for oral mucositis: a Google search and ChatGPT analysis

Cited by: 0
Authors
Hunter, Nathaniel [1 ]
Allen, David [2 ]
Xiao, Daniel [1 ]
Cox, Madisyn [1 ]
Jain, Kunal [2 ]
Affiliations
[1] Univ Texas Hlth Sci Ctr Houston, McGovern Med Sch, Houston, TX USA
[2] Univ Texas Hlth Sci Ctr Houston, Dept Otorhinolaryngol Head & Neck Surg, Houston, TX 77030 USA
Keywords
Oral mucositis; Head and neck cancer; Patient education; Information quality; Google analytics; Artificial intelligence; INTERNET; THERAPY;
DOI
10.1007/s00405-024-08913-5
CLC classification
R76 [Otorhinolaryngology]
Subject classification code
100213
Abstract
Purpose: Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and to assess the quality and readability of available resources compared with ChatGPT-generated responses.
Methods: Data related to Google searches for "oral mucositis" were analyzed. People Also Ask (PAA) questions (generated by Google) related to searches for "oral mucositis" were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association benchmark criteria, the Patient Education Materials Assessment Tool, the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics.
Results: Google search popularity for "oral mucositis" has significantly increased since 2004. Of the Google resources, 78% answered the associated PAA question and 6% met the criteria for universal readability. All (100%) of the ChatGPT-generated responses answered the prompt, and 20% met the criteria for universal readability when ChatGPT was asked to write for the appropriate audience.
Conclusion: Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than Google resources. After verification of accuracy by healthcare professionals, ChatGPT could be a reasonable alternative for generating universally readable patient education resources.
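For context, the two readability metrics named in the abstract are closed-form formulas over word, sentence, and syllable counts. The sketch below is a minimal, illustrative Python implementation using the standard published coefficients and a naive vowel-group syllable counter; the syllable heuristic and sample text are assumptions for illustration (the study does not describe its tooling), so scores may differ slightly from dedicated readability software.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels.
    Dedicated readability tools use dictionaries and exception rules;
    this approximation is for illustration only."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> dict:
    """Compute Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level (FKGL)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)

    words_per_sentence = len(words) / max(1, len(sentences))
    syllables_per_word = syllables / max(1, len(words))

    # Standard published coefficients for both formulas.
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return {"flesch_reading_ease": round(fre, 1),
            "flesch_kincaid_grade": round(fkgl, 1)}

if __name__ == "__main__":
    sample = ("Oral mucositis is soreness and swelling inside the mouth. "
              "It is a common side effect of chemotherapy and radiation.")
    print(readability(sample))
```

As a usage note, patient-education guidance (e.g., AMA and NIH recommendations) generally targets about a sixth-grade reading level, which is the kind of threshold "universal readability" usually refers to in studies of this type; the exact cutoffs applied in this paper are not stated in the abstract.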
Pages: 1609-1618
Page count: 10