Inter-rater reliability of the radiographic assessment of simple bone cysts

Cited by: 3

Authors
Cho, S. [1 ]
Yankanah, R. [1 ]
Babyn, P. [2 ,3 ]
Stimec, J. [4 ]
Doria, A. S. [4 ]
Stephens, D. [5 ]
Wright, J. G. [1 ]
Institutions
[1] Hosp Sick Children, Dept Surg, Toronto, ON, Canada
[2] Univ Saskatchewan, Dept Med Imaging, Saskatoon, SK, Canada
[3] Saskatoon Hlth Reg, Saskatoon, SK, Canada
[4] Hosp Sick Children, Dept Diagnost Imaging, Toronto, ON, Canada
[5] Hosp Sick Children, Res Inst, Toronto, ON, Canada
Funding
Canadian Institutes of Health Research;
Keywords
simple bone cyst; unicameral bone cyst; bone cyst; inter-rater reliability; radiography; SAMPLE-SIZE; KAPPA; MARROW; RISK; ENCHONDROMA; FRACTURES; INJECTION; AGREEMENT; SECONDARY;
DOI
10.1302/1863-2548.13.180140
Chinese Library Classification
R826.8 [Plastic Surgery]; R782.2 [Oral and Maxillofacial Plastic Surgery]; R726.2 [Pediatric Plastic Surgery]; R62 [Plastic Surgery (Reconstructive Surgery)];
Abstract
Purpose To develop an explicit set of parameters and criteria for the radiographic assessment of simple bone cysts (SBCs), and to evaluate the inter-rater reliability of single versus serial chronological reading methods.

Methods Radiographic criteria were developed from the literature and expert consensus. Single anteroposterior/lateral radiographs from 32 subjects with SBC were evaluated by three radiologists. A second reading was then conducted using revised criteria that included a visual schematic. In the third reading the same images were assessed, but the radiologists also had access to images from two additional time points. Inter-rater reliability was assessed after each reading using kappa (κ) and percentage agreement for categorical and binary parameters, and the intra-class correlation coefficient (ICC) for continuous parameters.

Results Parameters revised with more explicit definitions, including the visual schematic, showed consistent or improved inter-rater reliability, with the exceptions of "continuous cortical rim present" and cyst location in the metaphysis and mid-diaphysis. Cortical rim displayed only slight reliability throughout (κ = -0.008 to 0.16). All other categorical parameters had a percentage agreement above 0.8, or moderate (κ = 0.41 to 0.60), substantial (κ = 0.61 to 0.80), or almost perfect (κ = 0.81 to 1.0) inter-rater reliability in at least one reading. All continuous parameters demonstrated excellent inter-rater reliability (ICC > 0.75) in at least one reading, with the exception of scalloping (ICC = 0.37 to 0.70). Reliability values did not indicate a clearly superior method between single and serial chronological readings.

Conclusion Explicit criteria for the SBC assessment parameters demonstrated improved and substantial inter-rater reliability. Inter-rater reliability did not differ between single and serial chronological readings.
Pages: 226-235 (10 pages)