Patient education resources for oral mucositis: a Google search and ChatGPT analysis

Cited: 0
Authors:
Hunter, Nathaniel [1 ]
Allen, David [2 ]
Xiao, Daniel [1 ]
Cox, Madisyn [1 ]
Jain, Kunal [2 ]
Affiliations:
[1] Univ Texas Hlth Sci Ctr Houston, McGovern Med Sch, Houston, TX USA
[2] Univ Texas Hlth Sci Ctr Houston, Dept Otorhinolaryngol Head & Neck Surg, Houston, TX 77030 USA
Keywords:
Oral mucositis; Head and neck cancer; Patient education; Information quality; Google analytics; Artificial intelligence; INTERNET; THERAPY;
DOI: 10.1007/s00405-024-08913-5
Chinese Library Classification (CLC): R76 [Otorhinolaryngology]
Subject classification code: 100213
Abstract
Purpose: Oral mucositis affects 90% of patients receiving chemotherapy or radiation for head and neck malignancies. Many patients use the internet to learn about their condition and treatments; however, the quality of online resources is not guaranteed. Our objective was to determine the most common Google searches related to "oral mucositis" and to assess the quality and readability of available resources compared to ChatGPT-generated responses.
Methods: Data related to Google searches for "oral mucositis" were analyzed. People Also Ask (PAA) questions (generated by Google) related to searches for "oral mucositis" were documented. Google resources were rated on quality, understandability, ease of reading, and reading grade level using the Journal of the American Medical Association benchmark criteria, the Patient Education Materials Assessment Tool, the Flesch Reading Ease Score, and the Flesch-Kincaid Grade Level, respectively. ChatGPT-generated responses to the most popular PAA questions were rated using identical metrics.
Results: Google search popularity for "oral mucositis" has increased significantly since 2004. Of the Google resources, 78% answered the associated PAA question and 6% met the criteria for universal readability. Of the ChatGPT-generated responses, 100% answered the prompt, and 20% met the criteria for universal readability when asked to write for the appropriate audience.
Conclusion: Most resources provided by Google do not meet the criteria for universal readability. When prompted specifically, ChatGPT-generated responses were consistently more readable than Google resources. After verification of accuracy by healthcare professionals, ChatGPT could be a reasonable alternative for generating universally readable patient education resources.
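The two readability metrics used in the methods are simple formulas over word, sentence, and syllable counts. A minimal Python sketch is below; the vowel-group syllable counter is a rough heuristic for illustration only, not the dictionary-based counting that validated readability tools use.

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count runs of consecutive vowels (incl. 'y').
    # Validated tools use pronunciation dictionaries instead.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease Score, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = syllables / len(words)
    # Standard published coefficients for both formulas:
    fres = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fres, fkgl
```

Higher FRES means easier text, while FKGL approximates a U.S. school grade level; "universal readability" benchmarks in patient-education studies are commonly tied to roughly a sixth-grade level.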
Pages: 1609-1618 (10 pages)