Why increasing the number of raters only helps sometimes: Reliability and validity of peer assessment across tasks of different complexity

Cited: 5
Authors
Tong, Yimin [1 ]
Schunn, Christian D. [2 ]
Wang, Hong [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Foreign Languages, 2 Linggong Rd, Dalian 116024, Peoples R China
[2] Univ Pittsburgh, Learning Res & Dev Ctr, 3420 Forbes Ave, Pittsburgh, PA 15260 USA
Keywords
Validity; Reliability; Number of raters; Task complexity; Peer assessment; METAANALYSIS COMPARING PEER; STUDENTS PERCEPTIONS; HIGHER-EDUCATION; ASSESSMENT-TOOL; FEEDBACK; TEACHER; IMPACT; 2ND-LANGUAGE; PERFORMANCE; AGREEMENT;
DOI
10.1016/j.stueduc.2022.101233
Chinese Library Classification (CLC)
G40 [Education]
Discipline Codes
040101; 120403
Abstract
The number of raters is theoretically central to peer assessment reliability and validity, yet it is rarely studied. Further, requiring each student to assess more peers' documents increases not only the number of evaluations per document but also each assessor's workload, which can degrade assessment performance. Moreover, task complexity is likely a moderating factor, influencing both workload and validity. This study examined whether changing the number of required peer assessments per student (and thus the number of raters per document) affected peer assessment reliability and validity for tasks at different levels of complexity. 181 students completed, and provided peer assessments for, tasks at three levels of task complexity: low complexity (dictation), medium complexity (oral imitation), and high complexity (writing). Adequate validity of peer assessments was observed for all three task complexities at low reviewing loads. However, the impact of increasing the reviewing load varied between reliability and validity outcomes and by task complexity.
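The theoretical link between rater count and reliability that motivates the study can be illustrated with the Spearman-Brown prophecy formula from classical test theory; this is a general result, not a formula stated in the paper itself. If a single rating has reliability $\rho_1$, the expected reliability of the mean of $k$ independent, interchangeable ratings is

$$\rho_k = \frac{k\,\rho_1}{1 + (k-1)\,\rho_1}.$$

For example, with $\rho_1 = 0.4$, averaging $k = 4$ raters gives $\rho_4 = 1.6/2.2 \approx 0.73$. The formula assumes rater errors are independent and of equal quality; heavier per-assessor workloads may violate that assumption, consistent with the abstract's caveat that added reviewing load can degrade performance.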
Pages: 8