Examining the normality assumption of a design-comparable effect size in single-case designs

Cited by: 3
Authors
Chen, Li-Ting [1 ]
Chen, Yi-Kai [2 ]
Yang, Tong-Rong [2 ]
Chiang, Yu-Shan [3 ]
Hsieh, Cheng-Yu [2 ,4 ]
Cheng, Che [2 ]
Ding, Qi-Wen [5 ]
Wu, Po-Ju [6 ]
Peng, Chao-Ying Joanne [2 ,6 ]
Affiliations
[1] Univ Nevada, Dept Educ Studies, Reno, NV 89557 USA
[2] Natl Taiwan Univ, Dept Psychol, Taipei, Taiwan
[3] Indiana Univ Bloomington, Dept Curriculum & Instruct, Bloomington, IN USA
[4] Univ London, Royal Holloway, Dept Psychol, Egham, England
[5] Acad Sinica, Inst Sociol, Taipei, Taiwan
[6] Indiana Univ Bloomington, Dept Counseling & Educ Psychol, Bloomington, IN USA
Keywords
Single-case; Intervention; Standardized mean difference; Effect size; Design comparable; Normality; MULTIPLE-BASE-LINE; DIFFERENCE EFFECT SIZE; MONTE-CARLO; MAXIMUM-LIKELIHOOD; MULTILEVEL MODELS; SUBJECT RESEARCH; INTERVENTION; SAMPLE; METAANALYSIS; VIOLATIONS;
DOI
10.3758/s13428-022-02035-8
CLC number
B841 [Psychological research methods]
Discipline code
040201
Abstract
What Works Clearinghouse (WWC, 2022) recommends a design-comparable effect size (D-CES; i.e., g(AB)) to gauge an intervention in single-case experimental design (SCED) studies, or to synthesize findings in meta-analysis. So far, no research has examined g(AB)'s performance under non-normal distributions. This study expanded Pustejovsky et al. (2014) to investigate the impact of data distributions, number of cases (m), number of measurements (N), within-case reliability or intra-class correlation (rho), ratio of variance components (lambda), and autocorrelation (phi) on g(AB) in multiple-baseline (MB) design. The performance of g(AB) was assessed by relative bias (RB), relative bias of variance (RBV), MSE, and coverage rate of 95% CIs (CR). Findings revealed that g(AB) was unbiased even under non-normal distributions. g(AB)'s variance was generally overestimated, and its 95% CI was over-covered, especially when distributions were normal or nearly normal combined with small m and N. Large imprecision of g(AB) occurred when m was small and rho was large. According to the ANOVA results, data distributions contributed to approximately 49% of variance in RB and 25% of variance in both RBV and CR. m and rho each contributed to 34% of variance in MSE. We recommend g(AB) for MB studies and meta-analysis with N >= 16 and when either (1) data distributions are normal or nearly normal, m = 6, and rho = 0.6 or 0.8, or (2) data distributions are mildly or moderately non-normal, m >= 4, and rho = 0.2, 0.4, or 0.6. The paper concludes with a discussion of g(AB)'s applicability and design-comparability, and sound reporting practices of ES indices.
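The abstract evaluates g(AB) with four standard Monte Carlo criteria: relative bias (RB), relative bias of variance (RBV), mean squared error (MSE), and 95% CI coverage rate (CR). A minimal sketch of how these criteria are typically computed from simulation output — not the authors' code; the function name and the normal-approximation (z = 1.96) interval are illustrative assumptions:

```python
import numpy as np

def performance_metrics(estimates, variance_estimates, true_es, z=1.96):
    """Illustrative computation of RB, RBV, MSE, and CR across replications.

    estimates          -- effect-size estimates, one per simulation replication
    variance_estimates -- the estimator's variance estimate in each replication
    true_es            -- the known population effect size used to generate data
    """
    estimates = np.asarray(estimates, dtype=float)
    variance_estimates = np.asarray(variance_estimates, dtype=float)

    # Relative bias: signed deviation of the mean estimate from the true value.
    rb = (estimates.mean() - true_es) / true_es

    # Relative bias of variance: mean estimated variance vs. the empirical
    # sampling variance of the estimates (positive values = overestimation,
    # the pattern the abstract reports for g(AB)).
    true_var = estimates.var(ddof=1)
    rbv = (variance_estimates.mean() - true_var) / true_var

    # Mean squared error: overall imprecision of the estimator.
    mse = np.mean((estimates - true_es) ** 2)

    # Coverage rate: proportion of normal-approximation 95% CIs containing
    # the true effect size (values above .95 = over-coverage).
    half = z * np.sqrt(variance_estimates)
    cr = np.mean((estimates - half <= true_es) & (true_es <= estimates + half))

    return {"RB": rb, "RBV": rbv, "MSE": mse, "CR": cr}
```

Under this convention, RB near 0, RBV near 0, and CR near .95 indicate good performance; the study's finding of overestimated variance corresponds to RBV > 0 paired with CR > .95.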
Pages: 379–405
Page count: 27
Related papers (50 records in total)
  • [1] Examining the normality assumption of a design-comparable effect size in single-case designs
    Li-Ting Chen
    Yi-Kai Chen
    Tong-Rong Yang
    Yu-Shan Chiang
    Cheng-Yu Hsieh
    Che Cheng
    Qi-Wen Ding
    Po-Ju Wu
    Chao-Ying Joanne Peng
    Behavior Research Methods, 2024, 56 : 379 - 405
  • [2] Examining the Impact of Design-Comparable Effect Size on the Analysis of Single-Case Design in Special Education
    King, Seth A.
    Nylen, Brendon
    Enders, Olivia
    Wang, Lanqi
    Opeoluwa, Oluwatosin
    SCHOOL PSYCHOLOGY, 2024,
  • [3] Design-Comparable Effect Sizes in Multiple Baseline Designs: A General Modeling Framework
    Pustejovsky, James E.
    Hedges, Larry V.
    Shadish, William R.
    JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2014, 39 (05) : 368 - 393
  • [4] Comparing "Visual" Effect Size Indices for Single-Case Designs
    Manolov, Rumen
    Solanas, Antonio
    Leiva, David
    METHODOLOGY-EUROPEAN JOURNAL OF RESEARCH METHODS FOR THE BEHAVIORAL AND SOCIAL SCIENCES, 2010, 6 (02) : 49 - 58
  • [5] An effect size measure and Bayesian analysis of single-case designs
    Swaminathan, Hariharan
    Rogers, H. Jane
    Horner, Robert H.
    JOURNAL OF SCHOOL PSYCHOLOGY, 2014, 52 (02) : 213 - 230
  • [6] An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau
    Tarlow, Kevin R.
    BEHAVIOR MODIFICATION, 2017, 41 (04) : 427 - 467
  • [7] Autocorrelation and estimates of treatment effect size for single-case experimental design data
    Barnard-Brak, Lucy
    Watkins, Laci
    Richman, David M.
    BEHAVIORAL INTERVENTIONS, 2021, 36 (03) : 595 - 605
  • [8] Single-Case Design Effect-Size Distributions: Association With Procedural Parameters
    Ledford, Jennifer R.
    Eyler, Paige B.
    Windsor, Sienna A.
    Chow, Jason C.
    SCHOOL PSYCHOLOGY, 2024,
  • [9] A Priori Justification for Effect Measures in Single-Case Experimental Designs
    Manolov, Rumen
    Moeyaert, Mariola
    Fingerhut, Joelle E.
    PERSPECTIVES ON BEHAVIOR SCIENCE, 2022, 45 (01) : 153 - 186
  • [10] The abuse and neglect of single-case designs
    Mattaini, MA
    RESEARCH ON SOCIAL WORK PRACTICE, 1996, 6 (01) : 83 - 90