Patient education resources for oral mucositis: a Google search and ChatGPT analysis

Cited by: 0
Authors
Hunter, Nathaniel [1 ]
Allen, David [2 ]
Xiao, Daniel [1 ]
Cox, Madisyn [1 ]
Jain, Kunal [2 ]
Affiliations
[1] Univ Texas Hlth Sci Ctr Houston, McGovern Med Sch, Houston, TX USA
[2] Univ Texas Hlth Sci Ctr Houston, Dept Otorhinolaryngol Head & Neck Surg, Houston, TX 77030 USA
Keywords
Oral mucositis; Head and neck cancer; Patient education; Information quality; Google analytics; Artificial intelligence; INTERNET; THERAPY;
DOI
10.1007/s00405-024-08913-5
CLC number
R76 [Otorhinolaryngology]
Subject classification code
100213
Abstract
Purpose: Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and its treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and to assess the quality and readability of the available resources compared with ChatGPT-generated responses.
Methods: Data related to Google searches for "oral mucositis" were analyzed, and the People Also Ask (PAA) questions that Google generates for these searches were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association benchmark criteria, the Patient Education Materials Assessment Tool, the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics.
Results: Google search popularity for "oral mucositis" has increased significantly since 2004. Of the Google resources, 78% answered the associated PAA question and 6% met the criteria for universal readability. All (100%) of the ChatGPT-generated responses answered the prompt, and 20% met the criteria for universal readability when ChatGPT was asked to write for the appropriate audience.
Conclusion: Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than the Google resources. After verification of their accuracy by healthcare professionals, ChatGPT could be a reasonable alternative for generating universally readable patient education resources.
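Note: the two readability metrics named in the Methods are standard published formulas computed from word, sentence, and syllable counts. A minimal illustrative sketch follows; the regex-based syllable counter is a rough assumption for illustration only, not the authors' actual tooling.

    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: one syllable per contiguous vowel group;
        # dedicated readability tools use dictionaries or better rules.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text: str) -> tuple[float, float]:
        # Assumes non-empty English prose with sentence-final punctuation.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps = len(words) / sentences      # mean words per sentence
        spw = syllables / len(words)      # mean syllables per word
        fre = 206.835 - 1.015 * wps - 84.6 * spw    # Flesch Reading Ease
        fkgl = 0.39 * wps + 11.8 * spw - 15.59      # Flesch-Kincaid Grade Level
        return fre, fkgl

    fre, fkgl = readability("Rinse your mouth gently. Use a soft toothbrush.")
    print(f"FRE = {fre:.1f}, FKGL = {fkgl:.1f}")

Higher FRE and lower FKGL indicate easier text; patient-education guidance commonly targets roughly a sixth-grade reading level.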
Pages: 1609-1618
Page count: 10
Related papers
50 records
  • [21] Do We Trust ChatGPT as much as Google Search and Wikipedia?
    Jung, Yongnam
    Chen, Cheng
    Jang, Eunchae
    Sundar, S. Shyam
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [22] Burn Patient Education in the Modern Age: A Comparative Analysis of ChatGPT and Google Performance Answering Common Questions on Burn Injury and Management
    Pandya, Sumaarg
    Bonetti, Mario Alessandri
    Liu, Hilary Y.
    Jeong, Tiffany
    Ziembicki, Jenny A.
    Egro, Francesco M.
    JOURNAL OF BURN CARE & RESEARCH, 2025,
  • [23] Use of generative large language models for patient education on common surgical conditions: a comparative analysis between ChatGPT and Google Gemini
    El Senbawy, Omar Mahmoud
    Patel, Keval Bhavesh
    Wannakuwatte, Randev Ayodhya
    Thota, Akhila N.
    UPDATES IN SURGERY, 2025,
  • [24] Comparative utility analysis of Chordoma search information between ChatGPT vs. Google Web
    Thiru, Shankar S.
    Mesfin, Addisu
    WORLD NEUROSURGERY-X, 2025, 26
  • [25] ChatGPT versus Google Gemini: a comparison to evaluate patient education guide created on common neurological disorders
    Phillips, Vidith
    Kiryakoza, Fadi
    Arefin, Shamsul
    Choudhary, Nishtha
    Garifullin, Renat
    DISCOVER ARTIFICIAL INTELLIGENCE, 4 (1)
  • [26] Exploring the ability of ChatGPT to create quality patient education resources about kidney transplant
    Tran, Jacqueline Tian
    Burghall, Ashley
    Blydt-Hansen, Tom
    Cammer, Allison
    Goldberg, Aviva
    Hamiwka, Lorraine
    Johnson, Corinne
    Kehler, Conner
    Phan, Veronique
    Rosaasen, Nicola
    Ruhl, Michelle
    Strong, Julie
    Teoh, Chia Wei
    Wichart, Jenny
    Mansell, Holly
    PATIENT EDUCATION AND COUNSELING, 2024, 129
  • [27] Can ChatGPT™ and Google Assistant™ provide education for amblyopia patients?
    Zhang, Joakim Edward
    Wu, Gloria
    Tien, Katherine
    Madhok, Rohan
    Inani, Vrinda
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2023, 64 (08)
  • [28] ChatGPT, Google and healthcare institution sources of postoperative patient instructions
    Daungsupawong, Hinpetch
    Wiwanitkit, Viroj
    BJOG-AN INTERNATIONAL JOURNAL OF OBSTETRICS AND GYNAECOLOGY, 2024,
  • [29] Assessing Readability of Patient Education Materials: A Comparative Study of ASRS Resources and AI-Generated Content by Popular Large Language Models (ChatGPT 4.0 and Google Bard)
    Shi, Michael
    Hanna, Jovana
    Clavell, Christine
    Eid, Kevin
    Eid, Alen
    Ghorayeb, Ghassan
    Nguyen, John
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2024, 65 (07)
  • [30] Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard
    Cheong, Ryan Chin Taw
    Unadkat, Samit
    Mcneillis, Venkata
    Williamson, Andrew
    Joseph, Jonathan
    Randhawa, Premjit
    Andrews, Peter
    Paleri, Vinidh
    EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY, 2024, 281 (02) : 985 - 993