Interrater Agreement in the Evaluation of Discrepant Imaging Findings With the Radpeer System

Cited by: 54
Authors
Bender, Leila C. [1]
Linnau, Ken F. [1]
Meier, Eric N. [2]
Anzai, Yoshimi [1]
Gunn, Martin L. [1]
Affiliations
[1] Univ Washington, Dept Radiol, Seattle, WA 98104 USA
[2] Univ Washington, Dept Biostat, Seattle, WA 98104 USA
Keywords
quality assurance; quality improvement; Radpeer; radiology quality; peer assessments; reliability
DOI
10.2214/AJR.12.8972
Chinese Library Classification
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Subject Classification Codes
1002; 100207; 1009
Abstract
OBJECTIVE. The Radpeer system is central to the quality assurance process in many radiology practices. Previous studies have shown poor agreement between physicians in the evaluation of their peers. The purpose of this study was to assess the reliability of the Radpeer scoring system.
MATERIALS AND METHODS. A sample of 25 discrepant cases was extracted from our quality assurance database. Images were anonymized, and the associated reports and the identities of the interpreting radiologists were removed. Indications for the studies and descriptions of the discrepancies were provided. Twenty-one subspecialist attending radiologists rated the cases using the Radpeer scoring system. Multirater kappa statistics were used to assess interrater agreement, both with the standard scoring system and with scores dichotomized to reflect the practice of further review for cases rated 3 and 4. Subgroup analyses were conducted to assess subspecialist evaluation of cases.
RESULTS. Interrater agreement was slight to fair compared with that expected by chance. For the group of 21 raters, the kappa values were 0.11 (95% CI, 0.06-0.16) with the standard scoring system and 0.20 (95% CI, 0.13-0.27) with dichotomized scores. There was disagreement about whether a discrepancy had occurred at all in 20 of the 25 cases. Subgroup analyses did not reveal significant differences in the degree of interrater agreement.
CONCLUSION. The identification of discrepant interpretations is valuable for the education of individual radiologists and for larger-scale quality assurance and quality improvement efforts. Our results show that a ratings-based peer review system is unreliable and subjective for the evaluation of discrepant interpretations. Resources should be devoted to developing more robust and objective assessment procedures, particularly those with clear quality improvement goals.
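The abstract's central statistic is a multirater kappa computed over 21 raters and 25 cases, both on the raw four-point Radpeer scale and after dichotomizing scores into 1-2 versus 3-4. As a minimal sketch of how such a statistic is computed, the Python example below implements Fleiss' kappa, one common multirater kappa; the abstract does not specify which variant the authors used, and the rating counts shown are hypothetical, not the study's data.

```python
# Minimal sketch of Fleiss' multirater kappa. The data below are
# hypothetical and only illustrate the shape of the computation.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters assigning case i to category j.
    Every row must sum to the same number of raters."""
    n_cases, _ = counts.shape
    n_raters = counts[0].sum()
    # Observed per-case agreement: P_i = (sum_j n_ij^2 - n) / (n(n-1)).
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_cases * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 5 cases, 21 raters, Radpeer categories 1-4.
ratings = np.array([
    [10, 6, 4, 1],
    [2, 12, 5, 2],
    [1, 3, 14, 3],
    [8, 8, 4, 1],
    [0, 2, 15, 4],
])
print(f"Fleiss kappa (4 categories): {fleiss_kappa(ratings):.2f}")

# Dichotomization mirrors the practice described in the abstract:
# collapse scores 1-2 versus 3-4 (cases flagged for further review).
dichot = np.stack(
    [ratings[:, :2].sum(axis=1), ratings[:, 2:].sum(axis=1)], axis=1
)
print(f"Fleiss kappa (dichotomized): {fleiss_kappa(dichot):.2f}")
```

On the commonly used Landis-Koch benchmarks, a kappa of 0.11 indicates only slight agreement beyond chance and 0.20 only fair agreement, which is the interpretation the abstract reports.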
Pages: 1320-1327
Page count: 8
Related Papers (50 total)
  • [41] Essien M, Cooper ME, Gore A, Min TL, Risk BB, Sadigh G, Hu R, Hoch MJ, Weinberg BD. Interrater Agreement of BT-RADS for Evaluation of Follow-up MRI in Patients with Treated Primary Brain Tumor. American Journal of Neuroradiology, 2024, 45(9): 1308-1315.
  • [42] Leung JWT. MR Imaging in the Evaluation of Equivocal Clinical and Imaging Findings of the Breast. Magnetic Resonance Imaging Clinics of North America, 2010, 18(2): 295+.
  • [43] Fu MC, Buerba RA, Long WD III, Blizzard DJ, Lischuk AW, Haims AH, Grauer JN. Interrater and intrarater agreements of magnetic resonance imaging findings in the lumbar spine: significant variability across degenerative conditions. Spine Journal, 2014, 14(10): 2442-2448.
  • [44] van Oostendorp JY, van Hagen P, van der Mijnsbrugge GJH, Han-Geurts IJM. Study on 3D High-Resolution Anorectal Manometry Interrater Agreement in the Evaluation of Dyssynergic Defecation Disorders. Diagnostics, 2023, 13(16).
  • [45] Tichter AM, Mulcare MR, Carter WA. Interrater agreement of emergency medicine milestone levels: resident self-evaluation vs clinical competency committee consensus. American Journal of Emergency Medicine, 2016, 34(8): 1677-1679.
  • [46] Carucci LR, Levine MS, Rubesin SE, Laufer I, Assad S, Herlinger H. Evaluation of patients with jejunostomy tubes: Imaging findings. Radiology, 2002, 223(1): 241-247.
  • [47] Cheng J, Hou Y, Li J, Wang W, Xu Y, Yang C, Ma L, Dhillon HS, Openkova MS, Fan X, Li Z, Li K, Lou G. Agreement Between Magnetic Resonance Imaging and Pathologic Findings in the Tumor Size Evaluation Before and After Neoadjuvant Chemotherapy Treatment: A Prospective Study. International Journal of Gynecological Cancer, 2017, 27(7): 1472-1479.
  • [48] Kosiek K, Voegele A, Lainer M, Soennichsen A, Bowie P, Godycki-Cwirko M. Validity of and interrater agreement on the LINNEAUS Euro-PC medication safety incident classification system in primary care in Poland. Journal of Evaluation in Clinical Practice, 2014, 20(4): 369-374.
  • [49] Fu MC, Webb ML, Buerba RA, Neway WE, Brown JE, Trivedi M, Lischuk AW, Haims AH, Grauer JN. Comparison of agreement of cervical spine degenerative pathology findings in magnetic resonance imaging studies. Spine Journal, 2016, 16(1): 42-48.
  • [50] Galldiks N, Filss CP, Goldbrunner R, Langen KJ. Discrepant MR and [F-18] Fluoroethyl-L-Tyrosine PET Imaging Findings in a Patient with Bevacizumab Failure. Case Reports in Oncology, 2012, 5(3): 490-494.