Between Always and Never: Evaluating Uncertainty in Radiology Reports Using Natural Language Processing

Cited by: 16
Authors
Callen, Andrew L. [1 ]
Dupont, Sara M. [2 ]
Price, Adi [3 ]
Laguna, Ben [3 ]
McCoy, David [3 ]
Do, Bao [4 ]
Talbott, Jason [3 ]
Kohli, Marc [3 ]
Narvid, Jared [3 ]
Affiliations
[1] Univ Colorado, Dept Radiol, Anschutz Med Campus, Denver, CO 80045 USA
[2] Subtle Med Inc, Menlo Pk, CA USA
[3] Univ Calif San Francisco, Dept Radiol & Biomed Imaging, San Francisco, CA 94143 USA
[4] Stanford Univ, Med Ctr, Dept Radiol, Stanford, CA 94305 USA
Funding
U.S. National Institutes of Health;
Keywords
Diagnostic uncertainty; Natural language processing; MALPRACTICE; INFORMATION; ACCURACY; MEDICINE; ERRORS;
DOI
10.1007/s10278-020-00379-1
CLC classification
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002 ; 100207 ; 1009 ;
Abstract
The ideal radiology report reduces diagnostic uncertainty while avoiding ambiguity whenever possible. The purpose of this study was to characterize the use of uncertainty terms in radiology reports at a single institution and compare the use of these terms across imaging modalities, anatomic sections, patient characteristics, and radiologist characteristics. We hypothesized that the use of uncertainty terms would vary among radiologists and between subspecialties within radiology, and that the length of a report's impression would predict the use of uncertainty terms. Finally, we hypothesized that use of uncertainty terms would often be interpreted by human readers as "hedging." To test these hypotheses, we applied a natural language processing (NLP) algorithm to detect and count uncertainty terms within radiology report impressions. The algorithm was designed to detect usage of a published set of uncertainty terms. All 642,569 radiology report impressions from 171 reporting radiologists were collected from 2011 through 2015. For validation, two radiologists without knowledge of the software algorithm reviewed report impressions and were asked to determine whether each report was "uncertain" or "hedging." The presence of one or more uncertainty terms was then compared with the human readers' assessments. There were significant differences in the proportion of reports containing uncertainty terms across patient admission status and across anatomic imaging subsections. Reports with uncertainty were significantly longer than those without, although report length did not differ significantly between subspecialties or modalities. There were no significant differences in rates of uncertainty when comparing the attending radiologists' years of experience. With reader 1 as the gold standard, accuracy was 0.91, sensitivity was 0.92, specificity was 0.90, and precision was 0.88, with an F1-score of 0.90. With reader 2, accuracy was 0.84, sensitivity was 0.88, specificity was 0.82, and precision was 0.68, with an F1-score of 0.77. Substantial variability exists among radiologists and subspecialties in the use of uncertainty terms, and this variability cannot be explained by years of radiologist experience or by differences in the proportions of specific modalities. Furthermore, detection of uncertainty terms demonstrates good test characteristics for predicting human readers' assessments of uncertainty.
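The abstract describes flagging a report impression as "uncertain" when it contains one or more terms from a published uncertainty lexicon, then scoring that flag against human readers' labels using accuracy, sensitivity, specificity, precision, and F1-score. The sketch below illustrates such a pipeline in Python; the term list and the sample impressions are illustrative assumptions, not the study's published lexicon or data.

```python
import re

# Illustrative uncertainty terms. The study used a previously published set of
# uncertainty terms; this short list is an assumption for demonstration only.
UNCERTAINTY_TERMS = [
    "possibly", "possible", "may represent", "cannot exclude",
    "cannot be excluded", "suspicious for", "concerning for", "equivocal",
]

# One case-insensitive pattern with word boundaries around each phrase.
PATTERN = re.compile(
    r"\b(?:" + "|".join(re.escape(t) for t in UNCERTAINTY_TERMS) + r")\b",
    re.IGNORECASE,
)

def count_uncertainty_terms(impression: str) -> int:
    """Count uncertainty-term occurrences in one report impression."""
    return len(PATTERN.findall(impression))

def is_uncertain(impression: str) -> bool:
    """Flag an impression as 'uncertain' if it contains one or more terms."""
    return count_uncertainty_terms(impression) >= 1

def test_characteristics(predicted, reader):
    """Accuracy, sensitivity, specificity, precision, and F1-score of the
    term-based flag, treating the human reader's labels as the gold standard."""
    tp = sum(1 for p, r in zip(predicted, reader) if p and r)
    tn = sum(1 for p, r in zip(predicted, reader) if not p and not r)
    fp = sum(1 for p, r in zip(predicted, reader) if p and not r)
    fn = sum(1 for p, r in zip(predicted, reader) if not p and r)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {
        "accuracy": (tp + tn) / len(reader),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": f1,
    }

if __name__ == "__main__":
    impressions = [  # hypothetical report impressions
        "Findings may represent early pneumonia; atelectasis cannot be excluded.",
        "No acute intracranial abnormality.",
    ]
    reader_labels = [True, False]  # hypothetical reader assessments
    flags = [is_uncertain(text) for text in impressions]
    print(flags)
    print(test_characteristics(flags, reader_labels))
```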
Pages: 1194-1201
Number of pages: 8