Using Automated Essay Scores as an Anchor When Equating Constructed Response Writing Tests

Cited by: 10
Authors
Almond, Russell G. [1 ]
Affiliation
[1] Florida State Univ, Dept Educ Psychol & Learning Syst, Tallahassee, FL 32306 USA
Keywords
automated essay scoring; constructed response; equating; writing assessment;
DOI
10.1080/15305058.2013.816309
Chinese Library Classification
C [Social Sciences, General]
Discipline Classification Code
03; 0303
Abstract
Assessments consisting of only a few extended constructed response items (essays) are typically not equated using anchor test designs, because each form contains too few essay prompts to support meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common objective of many writing prompts) can be used in place of a more traditional anchor. The linear-logistic equating method used in this article is a variant of Tucker linear equating adapted to the limited score range typical of essays. The procedure is applied to historical data. Although it yields only small improvements over identity equating (not equating the prompts), it provides a viable alternative and a mechanism for checking that identity equating is appropriate. This may be particularly useful for measuring rater drift or for equating mixed-format tests.
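To make the method concrete, the following is a minimal sketch of how such a linear-logistic equating step might look, assuming it proceeds by logit-transforming the bounded essay scores, applying standard Tucker linear equating with the automated fluency score as the anchor variable, and back-transforming to the reported score scale. The function names, the endpoint adjustment `eps`, the anchor being left untransformed, and the synthetic-population weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: Tucker linear equating with an automated fluency score as anchor,
# applied on a logit scale so the linear step respects the bounded essay range.
# All names and defaults here are assumptions for illustration only.
import numpy as np

def logit(score, lo, hi, eps=0.5):
    """Map bounded scores in [lo, hi] onto the real line; eps keeps endpoints finite."""
    p = (score - lo + eps) / (hi - lo + 2 * eps)
    return np.log(p / (1 - p))

def inv_logit(z, lo, hi, eps=0.5):
    """Inverse of logit(): map real-line values back to the [lo, hi] score scale."""
    p = 1 / (1 + np.exp(-z))
    return p * (hi - lo + 2 * eps) + lo - eps

def tucker_linear(x, v1, y, v2, w1=0.5):
    """Standard Tucker linear equating of new-form scores x to the old-form (y)
    scale, using anchor v1 (observed with x) and v2 (observed with y).
    Returns the linear equating function e(s) = mu_y + slope * (s - mu_x)."""
    w2 = 1.0 - w1
    g1 = np.cov(x, v1)[0, 1] / np.var(v1, ddof=1)   # regression slope of X on V
    g2 = np.cov(y, v2)[0, 1] / np.var(v2, ddof=1)   # regression slope of Y on V
    dmu = v1.mean() - v2.mean()                      # anchor mean difference
    dvar = np.var(v1, ddof=1) - np.var(v2, ddof=1)   # anchor variance difference
    # Synthetic-population moments (Kolen & Brennan style Tucker formulas).
    mu_x = x.mean() - w2 * g1 * dmu
    mu_y = y.mean() + w1 * g2 * dmu
    var_x = np.var(x, ddof=1) - w2 * g1**2 * dvar + w1 * w2 * g1**2 * dmu**2
    var_y = np.var(y, ddof=1) + w1 * g2**2 * dvar + w1 * w2 * g2**2 * dmu**2
    slope = np.sqrt(var_y / var_x)
    return lambda s: mu_y + slope * (s - mu_x)

def linear_logistic_equate(x, v1, y, v2, lo=0, hi=12):
    """Equate essay scores x (new prompt) to the old prompt's scale: logit the
    bounded essay scores, Tucker-equate in the transformed space using the
    continuous automated fluency scores v1/v2 as the anchor, back-transform."""
    zx = logit(x, lo, hi)
    eq = tucker_linear(zx, v1, logit(y, lo, hi), v2)
    return inv_logit(eq(zx), lo, hi)
```

In this sketch only the essay scores pass through the logit, since they are the bounded quantity; the automated fluency anchor stays on its own continuous scale. Comparing the result against the raw scores themselves would correspond to the check against identity equating mentioned in the abstract.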
Pages: 73-91
Page count: 19