Assessment of the reliability of the Johns Hopkins/Agency for Healthcare Research and Quality hospital disaster drill evaluation tool
Cited by: 37
Authors:
Kaji, Amy H. [1,2,3,4]
Lewis, Roger J. [1,2,3]
Affiliations:
[1] Harbor UCLA Med Ctr, Dept Emergency Med, Torrance, CA 90509 USA
[2] Univ Calif Los Angeles, David Geffen Sch Med, Los Angeles, CA 90095 USA
[3] Los Angeles Biomed Res Inst, Torrance, CA USA
[4] Harbor UCLA Med Ctr, S Bay Disaster Resource Ctr, Los Angeles, CA USA
Funding:
US Agency for Healthcare Research and Quality (AHRQ);
DOI: 10.1016/j.annemergmed.2007.07.025
Chinese Library Classification: R4 [Clinical Medicine];
Discipline codes: 1002; 100602;
Abstract:
Study objective: The Joint Commission requires hospitals to implement 2 disaster drills per year to test the response phase of their emergency management plans. Despite this requirement, there is no direct evidence that such drills improve disaster response. Furthermore, there is no generally accepted, validated tool to evaluate hospital performance during disaster drills. We characterize the internal and interrater reliability of a hospital disaster drill performance evaluation tool developed by the Johns Hopkins University Evidence-based Practice Center, under contract from the Agency for Healthcare Research and Quality (AHRQ).

Methods: We evaluated the reliability of the Johns Hopkins/AHRQ drill performance evaluation tool by applying it to multiple hospitals in Los Angeles County, CA, participating in the November 2005 California statewide disaster drill. Thirty-two fourth-year medical student observers were deployed to specific zones (incident command, triage, treatment, and decontamination) in participating hospitals. Each observer completed common tool items, as well as tool items specific to their hospital zone. Two hundred items from the tool were dichotomously coded as indicating better versus poorer preparedness. An unweighted "raw performance" score was calculated by summing these dichotomous indicators. To quantify internal reliability, we calculated the Kuder-Richardson interitem consistency coefficient, and to assess interrater reliability, we computed the κ coefficient for each of the 11 pairs of observers who were deployed within the same hospital and zone.

Results: Of 17 invited hospitals, 6 agreed to participate. The raw performance scores for the 94 common items ranged from 18 (19%) to 63 (67%) across hospitals and zones. The raw performance scores of zone-specific items ranged from 14 of 45 (31%) to 30 of 45 (67%) in the incident command zone, from 2 of 17 (12%) to 15 of 17 (88%) in the triage zone, from 19 of 26 (73%) to 22 of 26 (85%) in the treatment zone, and from 2 of 18 (11%) to 10 of 18 (56%) in the decontamination zone. The Kuder-Richardson internal reliability, by zone, ranged from 0.72 (95% confidence interval [CI] 0.58 to 0.87) in the treatment zone to 0.97 (95% CI 0.95 to 0.99) in the incident command zone. The interrater reliability ranged, across hospital zones, from 0.24 (95% CI 0.09 to 0.38) to 0.72 (95% CI 0.63 to 0.81) for the 11 pairs of observers.

Conclusion: We found a high degree of internal reliability in the AHRQ instrument's items, suggesting the underlying construct of hospital preparedness is valid. Conversely, we found substantial variability in interrater reliability, suggesting that the instrument needs revision or substantial user training, as well as verification of interrater reliability in a particular setting before use.
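For context on the statistics reported above: both the Kuder-Richardson internal consistency coefficient (KR-20) and Cohen's κ can be computed directly from a matrix of dichotomous (0/1) item codes. The sketch below is illustrative only, not the authors' analysis code; the function names, the NumPy dependency, and the toy data are assumptions.

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 internal consistency for an
    observers x items matrix of 0/1 scores (one common convention:
    sample variance of the total scores in the denominator)."""
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion coded 1 per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of raw performance scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def cohens_kappa(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's kappa for two raters coding the same 0/1 items
    (undefined when chance agreement is exactly 1)."""
    p_o = (a == b).mean()                                        # observed agreement
    p_e = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: 6 observers x 10 dichotomous drill items.
rng = np.random.default_rng(0)
scores = rng.integers(0, 2, size=(6, 10))
print("raw performance per observer:", scores.sum(axis=1))  # unweighted sums
print("KR-20:", kr20(scores))
print("kappa (observers 0 vs 1):", cohens_kappa(scores[0], scores[1]))
```

In the study itself, κ was computed per pair of co-located observers and KR-20 per zone; the sketch only shows the shape of both calculations on a single matrix.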
Pages: 204-210
Page count: 7