Examining the normality assumption of a design-comparable effect size in single-case designs

Cited: 3
Authors
Chen, Li-Ting [1 ]
Chen, Yi-Kai [2 ]
Yang, Tong-Rong [2 ]
Chiang, Yu-Shan [3 ]
Hsieh, Cheng-Yu [2 ,4 ]
Cheng, Che [2 ]
Ding, Qi-Wen [5 ]
Wu, Po-Ju [6 ]
Peng, Chao-Ying Joanne [2 ,6 ]
Affiliations
[1] Univ Nevada, Dept Educ Studies, Reno, NV 89557 USA
[2] Natl Taiwan Univ, Dept Psychol, Taipei, Taiwan
[3] Indiana Univ Bloomington, Dept Curriculum & Instruct, Bloomington, IN USA
[4] Univ London, Royal Holloway, Dept Psychol, Egham, England
[5] Acad Sinica, Inst Sociol, Taipei, Taiwan
[6] Indiana Univ Bloomington, Dept Counseling & Educ Psychol, Bloomington, IN USA
Keywords
Single-case; Intervention; Standardized mean difference; Effect size; Design comparable; Normality; MULTIPLE-BASE-LINE; DIFFERENCE EFFECT SIZE; MONTE-CARLO; MAXIMUM-LIKELIHOOD; MULTILEVEL MODELS; SUBJECT RESEARCH; INTERVENTION; SAMPLE; METAANALYSIS; VIOLATIONS;
DOI
10.3758/s13428-022-02035-8
CLC number
B841 [Psychological research methods]
Discipline code
040201
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES, i.e., g(AB)) for gauging intervention effects in single-case experimental design (SCED) studies and for synthesizing findings in meta-analyses. To date, no research has examined g(AB)'s performance under non-normal distributions. This study extended Pustejovsky et al. (2014) by investigating the impact of data distribution, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (rho), ratio of variance components (lambda), and autocorrelation (phi) on g(AB) in the multiple-baseline (MB) design. The performance of g(AB) was assessed by relative bias (RB), relative bias of variance (RBV), mean squared error (MSE), and coverage rate of the 95% CI (CR). Findings revealed that g(AB) was unbiased even under non-normal distributions. However, g(AB)'s variance was generally overestimated, and its 95% CI over-covered, especially when distributions were normal or nearly normal and m and N were small. Imprecision of g(AB) was large when m was small and rho was large. According to the ANOVA results, data distribution contributed approximately 49% of the variance in RB and 25% of the variance in both RBV and CR; m and rho each contributed 34% of the variance in MSE. We recommend g(AB) for MB studies and meta-analyses with N >= 16 and when either (1) data distributions are normal or nearly normal, m = 6, and rho = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m >= 4, and rho = 0.2, 0.4, or 0.6. The paper concludes with a discussion of g(AB)'s applicability and design comparability, and of sound reporting practices for ES indices.
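The four performance criteria named in the abstract (RB, RBV, MSE, CR) have standard Monte Carlo definitions. The sketch below illustrates how they are computed across simulation replications; it uses a plain sample mean as the estimator, not the D-CES g(AB) itself, and the true effect size, replication count, and series length are hypothetical values chosen for illustration only.

```python
# Minimal sketch of the four performance criteria: relative bias (RB),
# relative bias of variance (RBV), mean squared error (MSE), and 95% CI
# coverage rate (CR). The simulated "estimator" is a simple sample mean,
# standing in for an effect-size estimator; all parameter values are
# hypothetical and not taken from the study itself.
import numpy as np

rng = np.random.default_rng(2022)
true_es = 0.5            # assumed true effect size (hypothetical)
n_reps, n_obs = 5000, 30 # replications and observations per replication

est = np.empty(n_reps)       # point estimate from each replication
var_est = np.empty(n_reps)   # estimated sampling variance per replication

for r in range(n_reps):
    sample = rng.normal(true_es, 1.0, size=n_obs)
    est[r] = sample.mean()
    var_est[r] = sample.var(ddof=1) / n_obs  # estimated variance of the mean

emp_var = est.var(ddof=1)                    # empirical sampling variance
rb = (est.mean() - true_es) / true_es        # relative bias of the estimator
rbv = (var_est.mean() - emp_var) / emp_var   # relative bias of its variance
mse = np.mean((est - true_es) ** 2)          # mean squared error
half = 1.96 * np.sqrt(var_est)               # 95% CI half-widths
cr = np.mean((est - half <= true_es) & (true_es <= est + half))  # coverage

print(f"RB={rb:+.3f}  RBV={rbv:+.3f}  MSE={mse:.4f}  CR={cr:.3f}")
```

In the study's terms, RB near zero indicates an unbiased estimator, RBV above zero indicates an overestimated variance, and CR above the nominal 0.95 indicates over-coverage of the 95% CI.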
Pages: 379-405 (27 pages)