A dedicated BI-RADS training programme: Effect on the inter-observer variation among screening radiologists

Cited by: 33
Authors
Timmers, J. M. H. [1 ,2 ]
van Doorne-Nagtegaal, H. J. [3 ]
Verbeek, A. L. M. [1 ,2 ]
den Heeten, G. J. [1 ,4 ]
Broeders, M. J. M. [1 ,2 ]
Affiliations
[1] Natl Expert & Training Ctr Breast Canc Screening, NL-6503 GJ Nijmegen, Netherlands
[2] Radboud Univ Nijmegen, Med Ctr, Dept Epidemiol Biostat & HTA, NL-6500 HB Nijmegen, Netherlands
[3] Comprehens Canc Ctr Netherlands IKNL, NL-1006 AE Amsterdam, Netherlands
[4] Univ Amsterdam, Acad Med Ctr, Dept Radiol, NL-100 MD Amsterdam, Netherlands
Keywords
BI-RADS; Training; Inter-observer variability; Screening; Mammography; DATA SYSTEM; AMERICAN-COLLEGE; BREAST-CANCER; MAMMOGRAPHY; VARIABILITY; AGREEMENT; EXPERIENCE; EXPERTISE; DIAGNOSIS
DOI
10.1016/j.ejrad.2011.07.011
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Code
1002; 100207; 1009
Abstract
Introduction: The Breast Imaging Reporting and Data System (BI-RADS) was introduced in the Dutch breast cancer screening programme to improve communication between medical specialists. Following its introduction, substantial variation in the use of the BI-RADS final assessment categories was noted among screening radiologists. We set up a dedicated training programme to reduce this variation. This study evaluates whether the programme was effective.
Materials and methods: Two comparable test sets were read before and after completion of the training programme. Each set contained 30 screening mammograms of referred women, selected from screening practice. The sets were read by 25 experienced and 30 new screening radiologists. Cohen's kappa (κ) was used to calculate inter-observer agreement. The 2003 version of BI-RADS was implemented in the screening programme, because the 2008 version requires diagnostic work-up, which is not available in the screening setting.
Results: The inter-observer agreement of all participating radiologists (n = 55) with the expert panel increased from a pre-training κ of 0.44 to a post-training κ of 0.48 (p = 0.14). The agreement of the new screening radiologists (n = 30) with the expert panel increased from κ = 0.41 to κ = 0.50 (p = 0.01), whereas there was no difference among the 25 experienced radiologists (κ = 0.48 to κ = 0.46, p = 0.60).
Conclusion: Our training programme in the BI-RADS lexicon resulted in a significant improvement in agreement among new screening radiologists. Overall, agreement among radiologists was moderate according to the Landis and Koch guidelines, in line with results reported in the literature. © 2011 Elsevier Ireland Ltd. All rights reserved.
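The agreement statistic used in the study, Cohen's kappa, corrects the observed proportion of agreement p_o for the proportion p_e expected by chance: κ = (p_o − p_e) / (1 − p_e). The sketch below is a minimal Python illustration of unweighted κ for two raters, not the authors' code; the BI-RADS category assignments in it are hypothetical, not data from the study.

# Minimal sketch of unweighted Cohen's kappa (not the authors' code).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected
    for the agreement expected by chance from the raters' marginals."""
    n = len(rater_a)
    # Observed agreement: fraction of cases with identical assessments.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal counts per category,
    # summed over categories and normalised by n^2.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS final assessment categories assigned by one
# radiologist and an expert panel to the same ten mammograms.
radiologist = [1, 2, 4, 5, 2, 3, 4, 1, 2, 5]
panel       = [1, 2, 4, 4, 2, 3, 5, 1, 2, 5]
print(f"kappa = {cohens_kappa(radiologist, panel):.2f}")  # kappa = 0.74

On the Landis and Koch scale cited in the conclusion (0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 substantial), the study's values, e.g. κ = 0.41 to 0.50 for new radiologists, fall in the moderate band.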
Pages: 2184-2188
Page count: 5
Related papers
43 records in total
  • [1] Inter-observer variability within BI-RADS and RANZCR mammographic density assessment schemes
    Damases, Christine N.
    Mello-Thoms, Claudia
    McEntee, Mark F.
    MEDICAL IMAGING 2016: IMAGE PERCEPTION, OBSERVER PERFORMANCE, AND TECHNOLOGY ASSESSMENT, 2016, 9787
  • [2] Persistent inter-observer variability of breast density assessment using BI-RADS® 5th edition guidelines
    Portnow, Leah H.
    Georgian-Smith, Dianne
    Haider, Irfanullah
    Barrios, Mirelys
    Bay, Camden P.
    Nelson, Kerrie P.
    Raza, Sughra
    CLINICAL IMAGING, 2022, 83: 21-27
  • [3] Comparison of inter-observer variability and diagnostic performance of the fifth edition of BI-RADS for breast ultrasound of static versus video images
    Youk, Ji Hyun
    Jung, Inkyung
    Yoon, Jung Hyun
    Kim, Sung Hun
    Kim, You Me
    Lee, Eun Hye
    Jeong, Sun Hye
    Kim, Min Jung
    ULTRASOUND IN MEDICINE AND BIOLOGY, 2016, 42 (09): 2083-2088
  • [4] Inter-observer variation in the histological diagnosis of polyps in colorectal cancer screening
    van Putten, Paul G.
    Hol, Lieke
    van Dekken, Herman
    van Krieken, J. Han
    van Ballegooijen, Marjolein
    Kuipers, Ernst J.
    van Leerdam, Monique E.
    HISTOPATHOLOGY, 2011, 58 (06): 974-981
  • [5] Reproducibility of BI-RADS Breast Density Measures Among Community Radiologists: A Prospective Cohort Study
    Spayne, Mary C.
    Gard, Charlotte C.
    Skelly, Joan
    Miglioretti, Diana L.
    Vacek, Pamela M.
    Geller, Berta M.
    BREAST JOURNAL, 2012, 18 (04): 326-333
  • [6] Diagnostic accuracy and inter-observer reliability of the O-RADS scoring system among staff radiologists in a North American academic clinical setting
    Pi, Yeli
    Wilson, Mitchell P.
    Katlariwala, Prayash
    Sam, Medica
    Ackerman, Thomas
    Paskar, Lee
    Patel, Vimal
    Low, Gavin
    ABDOMINAL RADIOLOGY, 2021, 46 (10): 4967-4973
  • [7] Inter- and intraradiologist variability in the BI-RADS assessment and breast density categories for screening mammograms
    Redondo, A.
    Comas, M.
    Macia, F.
    Ferrer, F.
    Murta-Nascimento, C.
    Maristany, M. T.
    Molins, E.
    Sala, M.
    Castells, X.
    BRITISH JOURNAL OF RADIOLOGY, 2012, 85 (1019): 1465-1470
  • [8] Effect of training on the inter-observer reliability of lameness scoring in dairy cattle
    March, S.
    Brinkmann, J.
    Winkler, C.
    ANIMAL WELFARE, 2007, 16 (02): 131-133
  • [9] Inter-observer reliability of high-resolution ultrasonography in the assessment of bone erosions in patients with rheumatoid arthritis: experience of an intensive dedicated training programme
    Gutierrez, Marwin
    Filippucci, Emilio
    Ruta, Santiago
    Salaffi, Fausto
    Blasetti, Patrizia
    Di Geso, Luca
    Grassi, Walter
    RHEUMATOLOGY, 2011, 50 (02): 373-380