A study examining inter-rater and intrarater reliability of a novel instrument for assessment of psoriasis: the Copenhagen Psoriasis Severity Index

Cited by: 27
Authors
Berth-Jones, J. [1]
Thompson, J. [2]
Papp, K. [3]
Affiliations
[1] Univ Hosp Coventry & Warwickshire NHS Trust, Dept Dermatol, Coventry CV2 2DX, W Midlands, England
[2] Univ Leicester, Dept Hlth Sci, Leicester LE1 7RH, Leics, England
[3] Prob Med Res, Waterloo, ON, Canada
Keywords
Copenhagen Psoriasis Severity Index; intraclass correlation coefficient; Physician's Global Assessment; psoriasis; Psoriasis Area and Severity Index; severity score;
DOI
10.1111/j.1365-2133.2008.08680.x
Chinese Library Classification
R75 [Dermatology and Venereology];
Discipline Code
100206;
Abstract
Background There is a perceived need for a better method for clinical assessment of the severity of psoriasis vulgaris. The most frequently used system is the Psoriasis Area and Severity Index (PASI), which has significant disadvantages, including the requirement for assessment of the percentage of skin affected, an inability to separate milder cases, and a lack of linearity. The Copenhagen Psoriasis Severity Index (CoPSI) is a novel approach which comprises assessment of three signs: erythema, plaque thickness and scaling, each on a four-point scale (0, none; 1, mild; 2, moderate; 3, severe), at each of 10 sites: face, scalp, upper limbs (excluding hands and wrists), hands and wrists, chest and abdomen, back, buttocks and sacral area, genitalia, lower limbs (excluding feet and ankles), feet and ankles.
Objectives To evaluate the inter-rater and intrarater reliability of the CoPSI and to provide comparative data from the PASI and a Physician's Global Assessment (PGA) used in recent clinical trials on psoriasis vulgaris.
Methods On the day before the study, 14 dermatologists (raters) with an interest in psoriasis participated in a detailed training session and discussion (2.5 h) on use of the scales. On the study day, each rater evaluated 16 adults with chronic plaque psoriasis in the morning and again in the afternoon. Raters were randomly assigned to assess subjects using the scales in a specific sequence, either PGA, CoPSI, PASI or PGA, PASI, CoPSI. Each rater used one sequence in the morning and the other in the afternoon. The primary endpoint was the inter-rater and intrarater reliability as determined by intraclass correlation coefficients (ICCs).
Results All three scales demonstrated 'substantial' (a priori defined as ICC > 80%) intrarater reliability. The inter-rater reliability for each of the CoPSI and PASI was also 'substantial' and for the PGA was 'moderate' (ICC 61%). The CoPSI was better at distinguishing between milder cases.
Conclusions The CoPSI and the PASI both provided reproducible psoriasis severity assessments. In terms of both intrarater and inter-rater reliability values, the CoPSI and the PASI are superior to the PGA. The CoPSI may overcome several of the problems associated with the PASI. In particular, the CoPSI avoids the need to estimate a percentage of skin involved, is able to separate milder cases where the PASI lacks sensitivity, and is also more linear and simpler. The CoPSI also incorporates more meaningful weighting of different anatomical areas.
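The scoring scheme and the reliability statistic described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' code: equal weighting of the 10 sites and a one-way random-effects ICC model are assumptions, since the abstract does not specify either (and the conclusions note that the actual CoPSI weights anatomical areas).

```python
import numpy as np

SITES = 10   # body sites listed in the abstract
SIGNS = 3    # erythema, plaque thickness, scaling

def copsi_total(scores):
    """Sum a (SITES, SIGNS) array of 0-3 ratings.

    Range is 0-90 under the equal-weighting assumption used here.
    """
    scores = np.asarray(scores)
    assert scores.shape == (SITES, SIGNS)
    assert ((scores >= 0) & (scores <= 3)).all()
    return int(scores.sum())

def icc_oneway(ratings):
    """One-way random-effects ICC(1) for a (subjects, raters) matrix.

    A common form for inter-rater agreement; the paper does not state
    which ICC model was used.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    subj_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    # between-subject and within-subject mean squares
    msb = k * ((subj_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Synthetic example matching the study design: 16 subjects, 14 raters.
rng = np.random.default_rng(0)
true_severity = rng.uniform(0, 90, size=16)
ratings = true_severity[:, None] + rng.normal(0, 5, size=(16, 14))
print(round(icc_oneway(ratings), 2))  # close to 1 when rater noise is small
```

With rater noise small relative to true between-subject spread, the ICC approaches 1; heavier noise pulls it toward 0, which is how the scales' 'substantial' versus 'moderate' reliability would register.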
Pages: 407-412
Page count: 6
Related Articles
50 records in total
  • [31] Job matching assessment: Inter-rater reliability of an instrument assessing employment characteristics of young adults with intellectual disabilities
    Morgan, Robert
    [J]. JOURNAL OF VOCATIONAL REHABILITATION, 2011, 34 (01) : 25 - 33
  • [32] Assessment of Impulsive Aggression in Patients with Severe Mental Disorders and Demonstrated Violence: Inter-Rater Reliability of Rating Instrument
    Felthous, Alan R.
    Weaver, Doris
    Evans, Roy
    Braik, Shukry
    Stanford, Matthew S.
    Johnson, Richard
    Metzger, Carole
    Bazile, Anita
    Barratt, Ernest
    [J]. JOURNAL OF FORENSIC SCIENCES, 2009, 54 (06) : 1470 - 1474
  • [33] Assessment of frailty by paramedics using the clinical frailty scale - an inter-rater reliability and accuracy study
    Fehlmann, Christophe A.
    Stuby, Loric
    Graf, Christophe
    Genoud, Matthieu
    Rigney, Rebecca
    Goldstein, Judah
    Eagles, Debra
    Suppan, Laurent
    [J]. BMC EMERGENCY MEDICINE, 2023, 23 (01)
  • [34] Ultrasound assessment for grading structural tendon changes in supraspinatus tendinopathy: an inter-rater reliability study
    Ingwersen, Kim Gordon
    Hjarbaek, John
    Eshoej, Henrik
    Larsen, Camilla Marie
    Vobbe, Jette
    Juul-Kristensen, Birgit
    [J]. BMJ OPEN, 2016, 6 (05)
  • [36] Validity and inter-rater reliability of the Lindop Parkinson's Disease Mobility Assessment: a preliminary study
    Pearson, M. J. T.
    Lindop, F. A.
    Mockett, S. P.
    Saunders, L.
    [J]. PHYSIOTHERAPY, 2009, 95 (02) : 126 - 133
  • [37] Inter-rater reliability of a novel neurobehavioral scale for outcomes assessment in rats following traumatic brain injury
    Loadholt, Chary D.
    Larson, Brett E.
    Andriakos, Peter G.
    Boas, Stefan
    Trahan, Tabitha E.
    Tran, Tram L.
    Falls, William A.
    Hammack, Sayamwong E.
    Freeman, Kalev
    [J]. FASEB JOURNAL, 2011, 25
  • [38] Reliability (Inter-rater Agreement) of the Barthel Index for Assessment of Stroke Survivors Systematic Review and Meta-analysis
    Duffy, Laura
    Gajree, Shelley
    Langhorne, Peter
    Stott, David J.
    Quinn, Terence J.
    [J]. STROKE, 2013, 44 (02) : 462 - 468
  • [39] Inter-rater reliability and validity of risk of bias instrument for non-randomized studies of exposures: a study protocol
    Jeyaraman, Maya M.
    Al-Yousif, Nameer
    Robson, Reid C.
    Copstein, Leslie
    Balijepalli, Chakrapani
    Hofer, Kimberly
    Fazeli, Mir S.
    Ansari, Mohammed T.
    Tricco, Andrea C.
    Rabbani, Rasheda
    Abou-Setta, Ahmed M.
    [J]. SYSTEMATIC REVIEWS, 2020, 9 (01)