Using explanatory item response models to analyze group differences in science achievement

Cited by: 25
Author: Briggs, Derek C. [1]
Affiliation: [1] Univ Colorado, Boulder, CO 80309 USA
DOI: 10.1080/08957340801926086
CLC classification: G40 [Education]
Subject classification codes: 040101; 120403
Abstract
This article illustrates the use of an explanatory item response modeling (EIRM) approach in the context of measuring group differences in science achievement. The distinction between item response models and EIRMs, recently elaborated by De Boeck and Wilson (2004), is presented within the statistical framework of generalized linear mixed models. It is shown that the EIRM approach provides a powerful framework for both a psychometric and statistical analysis of group differences. This is contrasted with the more typical two-step approach, in which psychometric analysis (i.e., measurement) and statistical analysis (i.e., explanation) occur independently. The two approaches are each used to describe and explain racial/ethnic gaps on a standardized science test. It is shown that the EIRM approach results in estimated racial/ethnic achievement gaps that are larger than those found in the two-step approach. In addition, when science achievement is examined by subdomains, the magnitudes of racial/ethnic gap estimates under the EIRM approach are more variable and more sensitive to the inclusion of contextual variables. These differences stem from the fact that the EIRM approach allows for disattenuated estimates of group-level parameters, whereas the two-step approach depends on estimates of science achievement that are shrunken as a function of measurement error.
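The contrast the abstract draws, disattenuated joint (EIRM) gap estimates versus attenuated two-step estimates, can be illustrated with a small simulation. The sketch below is not the article's analysis; the simulation design, sample sizes, and the simplification of treating item difficulties as known are all illustrative assumptions. It fits a latent-regression Rasch model by marginal maximum likelihood with Gauss-Hermite quadrature, then compares the jointly estimated group gap with a two-step gap computed from shrunken EAP ability scores.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Simulate a Rasch-model dataset with a true group gap in latent ability
N, I = 2000, 20
group = np.repeat([0, 1], N // 2)
gamma_true = 0.5                       # true group difference (assumption)
b = rng.normal(0.0, 1.0, I)            # item difficulties, treated as known
theta = gamma_true * group + rng.normal(0.0, 1.0, N)
Y = rng.binomial(1, expit(theta[:, None] - b[None, :]))

# Probabilist's Gauss-Hermite quadrature for the N(0, 1) residual ability
nodes, weights = np.polynomial.hermite_e.hermegauss(21)
w = weights / weights.sum()

def neg_loglik(params):
    """Marginal negative log-likelihood of the latent-regression Rasch model."""
    gamma, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = 0.0
    for g in (0, 1):
        Yg = Y[group == g]
        th = gamma * g + sigma * nodes                   # abilities at nodes
        p = expit(th[:, None] - b[None, :])              # (Q, I)
        lp = Yg @ np.log(p).T + (1 - Yg) @ np.log(1 - p).T  # (n, Q)
        ll += np.log(np.exp(lp) @ w).sum()
    return -ll

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
gamma_eirm = res.x[0]                  # joint (EIRM-style) gap estimate

# Two-step: EAP ability scores under a covariate-free N(0, 1) prior,
# then the difference of group means of those shrunken scores
p = expit(nodes[:, None] - b[None, :])
lp = Y @ np.log(p).T + (1 - Y) @ np.log(1 - p).T
post = np.exp(lp) * w
eap = (post @ nodes) / post.sum(axis=1)
gap_two_step = eap[group == 1].mean() - eap[group == 0].mean()
```

Because EAP scores are shrunken toward the prior mean in proportion to their measurement error, the two-step gap comes out smaller than the jointly estimated `gamma_eirm`, mirroring the attenuation the article describes.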
Pages: 89-118
Page count: 30
Related papers
50 in total
  • [41] Analysis of longitudinal randomized clinical trials using item response models
    Glas, Cees A. W.
    Geerlings, Hanneke
    van de Laar, Mart A. F. J.
    Taal, Erik
    CONTEMPORARY CLINICAL TRIALS, 2009, 30 (02) : 158 - 170
  • [42] Using SAS PROC NLMIXED to fit item response theory models
    Sheu, Ching-Fan
    Chen, Cheng-Te
    Su, Ya-Hui
    Wang, Wen-Chung
    Behavior Research Methods, 2005, 37 : 202 - 218
  • [43] Integration of Automated Essay Scoring Models Using Item Response Theory
    Aomi, Itsuki
    Tsutsumi, Emiko
    Uto, Masaki
    Ueno, Maomi
    ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2021), PT II, 2021, 12749 : 54 - 59
  • [44] Assessing Fit of Item Response Models Using the Information Matrix Test
    Ranger, Jochen
    Kuhn, Joerg-Tobias
    JOURNAL OF EDUCATIONAL MEASUREMENT, 2012, 49 (03) : 247 - 268
  • [45] Using Item Response Theory Models to Evaluate the Practice Environment Scale
    Raju, Dheeraj
    Su, Xiaogang
    Patrician, Patricia A.
    JOURNAL OF NURSING MEASUREMENT, 2014, 22 (02) : 323 - 341
  • [46] Information Matrix Test for Item Response Models Using Stochastic Approximation
    Han, Youngjin
    Liu, Yang
    Yang, Ji Seung
    MULTIVARIATE BEHAVIORAL RESEARCH, 2024, 59 (03) : 651 - 653
  • [47] Formulating multidimensional item response models using the SAS NLMIXED procedure
    Chen, CT
    Wang, WC
    Sheu, CF
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2004, 39 (5-6) : 420 - 420
  • [48] Development and Standardization of Achievement Test in Senior Secondary School Mathematics Using Item Response Theory
    Enunwah, Clement Ifeanyi
    Akwa, Ayang Mbeh
    EDULEARN14: 6TH INTERNATIONAL CONFERENCE ON EDUCATION AND NEW LEARNING TECHNOLOGIES, 2014, : 7424 - 7431
  • [49] Item Response Theory Using a Finite Mixture of Logistic Models with Item-Specific Mixing Weights
    Mori, Joji
    Kano, Yutaka
    JOURNAL JAPANESE SOCIETY OF COMPUTATIONAL STATISTICS, 2013, 26 (01): : 17 - 38
  • [50] On the Use of Factor-Analytic Multinomial Logit Item Response Models to Account for Individual Differences in Response Style
    Johnson, Timothy R.
    Bolt, Daniel M.
    JOURNAL OF EDUCATIONAL AND BEHAVIORAL STATISTICS, 2010, 35 (01) : 92 - 114