Interrater Agreement in the Evaluation of Discrepant Imaging Findings With the Radpeer System

Cited by: 54
Authors
Bender, Leila C. [1 ]
Linnau, Ken F. [1 ]
Meier, Eric N. [2 ]
Anzai, Yoshimi [1 ]
Gunn, Martin L. [1 ]
Affiliations
[1] Univ Washington, Dept Radiol, Seattle, WA 98104 USA
[2] Univ Washington, Dept Biostat, Seattle, WA 98104 USA
Keywords
quality assurance; quality improvement; Radpeer; radiology quality; peer assessments; reliability
DOI
10.2214/AJR.12.8972
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Codes
1002; 100207; 1009
Abstract
OBJECTIVE. The Radpeer system is central to the quality assurance process in many radiology practices. Previous studies have shown poor agreement between physicians in the evaluation of their peers. The purpose of this study was to assess the reliability of the Radpeer scoring system.

MATERIALS AND METHODS. A sample of 25 discrepant cases was extracted from our quality assurance database. Images were anonymized, and the associated reports and the identities of the interpreting radiologists were removed. Indications for the studies and descriptions of the discrepancies were provided. Twenty-one subspecialist attending radiologists rated the cases using the Radpeer scoring system. Multirater kappa statistics were used to assess interrater agreement, both with the standard scoring system and with scores dichotomized to reflect the practice of further review for cases rated 3 or 4. Subgroup analyses were conducted to assess subspecialist evaluation of cases.

RESULTS. Interrater agreement was slight to fair compared with that expected by chance. For the group of 21 raters, the kappa values were 0.11 (95% CI, 0.06-0.16) with the standard scoring system and 0.20 (95% CI, 0.13-0.27) with dichotomized scores. There was disagreement about whether a discrepancy had occurred at all in 20 of the 25 cases. Subgroup analyses did not reveal significant differences in the degree of interrater agreement.

CONCLUSION. The identification of discrepant interpretations is valuable for the education of individual radiologists and for larger-scale quality assurance and quality improvement efforts. Our results show that a ratings-based peer review system is unreliable and subjective for the evaluation of discrepant interpretations. Resources should be devoted to developing more robust and objective assessment procedures, particularly those with clear quality improvement goals.
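The abstract names the statistical approach (a multirater kappa, computed once on the standard four-point Radpeer scale and once on scores dichotomized at the 3-or-4 threshold) but not the computation itself. The sketch below shows how such an analysis could look in Python using the Fleiss kappa implementation in statsmodels; the random rating matrix, the choice of Fleiss' formulation, and all variable names are illustrative assumptions, not the study's data or code.

```python
# A minimal sketch (not from the paper): multirater (Fleiss) kappa on
# Radpeer-style ratings, with and without dichotomization, using statsmodels.
# The rating matrix is randomly generated for illustration only.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
# Rows = 25 cases, columns = 21 raters; Radpeer scores 1-4 (hypothetical data).
scores = rng.integers(1, 5, size=(25, 21))

# Standard four-point scoring system.
table, _ = aggregate_raters(scores)            # cases x categories count table
kappa_standard = fleiss_kappa(table, method="fleiss")

# Dichotomized scores: 1-2 (no further review) vs. 3-4 (flagged for review).
dichotomized = (scores >= 3).astype(int)
table_d, _ = aggregate_raters(dichotomized)
kappa_dichotomized = fleiss_kappa(table_d, method="fleiss")

print(f"kappa, standard 1-4 scale: {kappa_standard:.2f}")
print(f"kappa, dichotomized:       {kappa_dichotomized:.2f}")
```

Dichotomizing at the 3-or-4 threshold mirrors the practice, described in the abstract, of sending only cases scored 3 or 4 for further review.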
Pages: 1320-1327
Page count: 8