Enhancing Readability of Online Patient-Facing Content: The Role of AI Chatbots in Improving Cancer Information Accessibility

Cited by: 10
Authors
Abreu, Andres A. [1]
Murimwa, Gilbert Z. [1]
Farah, Emile [1]
Stewart, James W. [2]
Zhang, Lucia [1]
Rodriguez, Jonathan [1]
Sweetenham, John [1]
Zeh, Herbert J. [1]
Wang, Sam C. [1]
Polanco, Patricio M. [1]
Affiliations
[1] UT Southwestern Med Ctr, Dept Surg, Div Surg Oncol, 5323 Harry Hines Blvd, Dallas, TX 75390 USA
[2] Yale Sch Med, Dept Surg, New Haven, CT USA
Keywords
HEALTH LITERACY; ASSOCIATION; INTERNET; WEB
DOI
10.6004/jnccn.2023.7334
Chinese Library Classification (CLC)
R73 [Oncology]
Subject Classification Code
100214
Abstract
Background: Internet-based health education is increasingly vital in patient care. However, the readability of online information often exceeds the average reading level of the US population, limiting accessibility and comprehension. This study investigates the use of chatbot artificial intelligence to improve the readability of cancer-related patient-facing content.

Methods: We used ChatGPT 4.0 to rewrite content about breast, colon, lung, prostate, and pancreatic cancer across 34 websites associated with NCCN Member Institutions. Readability was analyzed using the Fry Readability Score, Flesch-Kincaid Grade Level, Gunning Fog Index, and Simple Measure of Gobbledygook (SMOG). The primary outcome was the mean readability score for the original and artificial intelligence (AI)-generated content. As secondary outcomes, we assessed accuracy, similarity, and quality using F1 scores, cosine similarity scores, and section 2 of the DISCERN instrument, respectively.

Results: The mean readability level across the 34 websites was equivalent to a university freshman level (grade 13 ± 1.5). After ChatGPT's intervention, the AI-generated outputs had a mean readability score equivalent to a high school freshman level (grade 9 ± 0.8). The overall F1 score for the rewritten content was 0.87, with a precision of 0.934 and a recall of 0.814. Compared with their original counterparts, the AI-rewritten passages had a cosine similarity score of 0.915 (95% CI, 0.908-0.922). The improved readability was attributable to simpler words and shorter sentences. The mean DISCERN score of a random sample of AI-generated content was equivalent to "good" (28.5 ± 5), with no significant difference from the original counterparts.

Conclusions: Our study demonstrates the potential of AI chatbots to improve the readability of patient-facing content while maintaining content quality. The decrease in requisite literacy after AI revision underscores the potential of this technology to reduce health care disparities caused by a mismatch between the educational resources available to patients and their health literacy.
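As a minimal illustration of the evaluation described in the abstract, the sketch below (not the authors' code) shows how the formula-based readability grades and the cosine similarity between an original passage and its AI-rewritten counterpart might be computed in Python. It assumes the third-party textstat and scikit-learn packages; the two sample passages are invented placeholders, and the graph-based Fry score is omitted.

# Minimal sketch of the readability and similarity evaluation described in the
# abstract. Assumes the third-party `textstat` and `scikit-learn` packages;
# the sample passages are placeholders, not data from the study.
import textstat
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original = (
    "Pancreatic adenocarcinoma frequently presents at an advanced stage, "
    "necessitating multimodal therapy and multidisciplinary evaluation."
)
rewritten = (
    "Pancreatic cancer is often found late. Treatment usually combines "
    "several approaches, and a team of specialists plans the care."
)

def grade_levels(text):
    # Formula-based grade estimates used in the study (the Fry score is
    # read from a graph and is not computed here).
    return {
        "flesch_kincaid": textstat.flesch_kincaid_grade(text),
        "gunning_fog": textstat.gunning_fog(text),
        "smog": textstat.smog_index(text),
    }

print("Original grades:", grade_levels(original))
print("Rewritten grades:", grade_levels(rewritten))

# Cosine similarity between TF-IDF vectors of the paired passages,
# analogous to the 0.915 reported in the abstract.
tfidf = TfidfVectorizer().fit_transform([original, rewritten])
print("Cosine similarity:", cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Consistency check of the reported F1 score against its components:
# F1 = 2PR / (P + R) = 2(0.934)(0.814) / (0.934 + 0.814) ≈ 0.87
precision, recall = 0.934, 0.814
print("F1:", round(2 * precision * recall / (precision + recall), 2))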
Pages: 8