English Learners and Constructed-Response Science Test Items: Challenges and Opportunities

Cited by: 1
|
Authors
Noble, Tracy [1 ,4 ]
Wells, Craig S. [2 ]
Rosebery, Ann S. [3 ]
Affiliations
[1] TERC, Cambridge, MA USA
[2] Univ Massachusetts Amherst, Amherst, MA USA
[3] Cheche Konnen Ctr, Cambridge, MA USA
[4] TERC, 2067 Massachusetts Ave, Cambridge, MA 02140 USA
Keywords
Assessment; Constructed Response; Large-Scale Assessment; Large-Scale Science; Language Learners; Linguistic Complexity; Cultural Validity; Assessments; Performance; Students; Progress; Demands
DOI
10.1080/10627197.2023.2226387
Chinese Library Classification (CLC)
G40 [Education]
Discipline Classification Codes
040101; 120403
Abstract
This article reports on two quantitative studies of English learners' (ELs) interactions with constructed-response items from a Grade 5 state science test. Study 1 investigated the relationships between the constructed-response item-level variables of English Reading Demand, English Writing Demand, and Background Knowledge Demand and the performance of ELs vs. non-ELs on those items. English Writing Demand was the strongest predictor of Differential Item Functioning favoring non-ELs over ELs for constructed-response items. In Study 2, we investigated the student-level variable of English language proficiency level and found that lower English language proficiency was related to greatly increased odds of omitting a response to a constructed-response item, even when controlling for science proficiency. These findings challenge the validity of scores on constructed-response test items as measures of ELs' science proficiency.
Pages: 246 - 272
Page count: 27
Related Papers
50 records in total
  • [1] Automated Scoring of Constructed-Response Science Items: Prospects and Obstacles
    Liu, Ou Lydia
    Brew, Chris
    Blackmore, John
    Gerard, Libby
    Madhok, Jacquie
    Linn, Marcia C.
    EDUCATIONAL MEASUREMENT-ISSUES AND PRACTICE, 2014, 33 (02) : 19 - 28
  • [2] A Multimedia Effect for Multiple-Choice and Constructed-Response Test Items
    Lindner, Marlit A.
    Schult, Johannes
    Mayer, Richard E.
    JOURNAL OF EDUCATIONAL PSYCHOLOGY, 2022, 114 (01) : 72 - 88
  • [3] Comparison of Selected- and Constructed-Response Items
    Li, Haiying
    ARTIFICIAL INTELLIGENCE IN EDUCATION: POSTERS AND LATE BREAKING RESULTS, WORKSHOPS AND TUTORIALS, INDUSTRY AND INNOVATION TRACKS, PRACTITIONERS AND DOCTORAL CONSORTIUM, PT II, 2022, 13356 : 362 - 366
  • [4] Gender differences for constructed-response mathematics items
    Pomplun, M
    Capps, L
    EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT, 1999, 59 (04) : 597 - 614
  • [5] Targeted Linguistic Simplification of Science Test Items for English Learners
    Noble, Tracy
    Sireci, Stephen G.
    Wells, Craig S.
    Kachchaf, Rachel R.
    Rosebery, Ann S.
    Wang, Yang Caroline
    AMERICAN EDUCATIONAL RESEARCH JOURNAL, 2020, 57 (05) : 2175 - 2209
  • [6] Automatic scoring of constructed-response items with latent semantic analysis
    Lenhard, Wolfgang
    Baier, Herbert
    Hoffmann, Joachim
    Schneider, Wolfgang
    DIAGNOSTICA, 2007, 53 (03) : 155 - 165
  • [7] Weighting Constructed-Response items in IRT-based exams
    Sykes, RC
    Hou, LL
    APPLIED MEASUREMENT IN EDUCATION, 2003, 16 (04) : 257 - 275
  • [8] The effects of test length and sample size on the reliability and equating of tests composed of constructed-response items
    Fitzpatrick, AR
    Yen, WM
    APPLIED MEASUREMENT IN EDUCATION, 2001, 14 (01) : 31 - 57
  • [9] How does the number of actions on constructed-response items relate to test-taking effort and performance?
    Ivanova, Militsa
    Michaelides, Michalis
    Eklof, Hanna
    EDUCATIONAL RESEARCH AND EVALUATION, 2020, 26 (5-6) : 252 - 274
  • [10] Use of Adjustment by Minimum Discriminant Information in Linking Constructed-Response Test Scores in the Absence of Common Items
    Lee, Yi-Hsuan
    Haberman, Shelby J.
    Dorans, Neil J.
    JOURNAL OF EDUCATIONAL MEASUREMENT, 2019, 56 (02) : 452 - 472