Assessing the Quality of Student-Generated Content at Scale: A Comparative Analysis of Peer-Review Models

Cited by: 6
Authors
Darvishi, Ali [1 ]
Khosravi, Hassan [1 ]
Rahimi, Afshin [1 ]
Sadiq, Shazia [1 ]
Gasevic, Dragan [2 ]
Affiliations
[1] Univ Queensland, Sch Informat Technol & Elect Engn, Brisbane, Qld 4072, Australia
[2] Monash Univ, Fac Informat Technol, Melbourne, Vic 3800, Australia
Source
IEEE Transactions on Learning Technologies
Funding
Australian Research Council
Keywords
Reliability; Analytical models; Probabilistic logic; Crowdsourcing; Task analysis; Data models; Adaptation models; Consensus approaches; crowdsourcing in education; learnersourcing; learning analytics; peer review; FEEDBACK; SIMILARITY; FUTURE;
DOI
10.1109/TLT.2022.3229022
Chinese Library Classification
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Engaging students in creating learning resources has demonstrated pedagogical benefits. However, to effectively utilize a repository of student-generated content (SGC), a selection process is needed to separate high- from low-quality resources as some of the resources created by students can be ineffective, inappropriate, or incorrect. A common and scalable approach is to use a peer-review process where students are asked to assess the quality of resources authored by their peers. Given that judgments of students, as experts-in-training, cannot wholly be relied upon, a redundancy-based method is widely employed where the same assessment task is given to multiple students. However, this approach introduces a new challenge, referred to as the consensus problem: How can we assign a final quality to a resource given ratings by multiple students? To address this challenge, we investigate the predictive performance of 18 inference models across five well-established categories of consensus approaches for inferring the quality of SGC at scale. The analysis is based on the engagement of 2141 undergraduate students across five courses in creating 12 803 resources and 77 297 peer reviews. Results indicate that the quality of reviews is quite diverse, and students tend to overrate. Consequently, simple statistics such as mean and median fail to identify poor-quality resources. Findings further suggest that incorporating advanced probabilistic and text analysis methods to infer the reviewers' reliability and reviews' quality improves performance; however, there is still an evident need for instructor oversight and training of students to write compelling and reliable reviews.
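To make the consensus problem concrete: the paper benchmarks 18 inference models, but even a minimal sketch shows the gap between the simple statistics the abstract finds inadequate and a reliability-aware aggregate. The Python below is an illustrative assumption, not any of the paper's models; the 1-5 rating scale, the toy data, and the reliability-update rule are all invented for this example.

# Minimal sketch (not from the paper): three consensus baselines for
# aggregating peer ratings of a resource. The 1-5 rating scale, the toy
# data, and the reliability-update rule are all illustrative assumptions.
from statistics import mean, median

# reviews maps each resource id to the (reviewer_id, rating) pairs it received.
reviews = {
    "res1": [("s1", 5), ("s2", 4), ("s3", 5)],
    "res2": [("s1", 4), ("s2", 2), ("s4", 5)],
}

def mean_consensus(reviews):
    # Simple average of all ratings per resource.
    return {res: mean(r for _, r in rs) for res, rs in reviews.items()}

def median_consensus(reviews):
    # Median is more robust to a single outlying rating.
    return {res: median(r for _, r in rs) for res, rs in reviews.items()}

def weighted_consensus(reviews, n_iters=10):
    # Alternate between (a) scoring each reviewer's reliability as the inverse
    # of their average deviation from the current consensus and (b) recomputing
    # the consensus as a reliability-weighted mean of the ratings.
    quality = mean_consensus(reviews)
    reliability = {rev: 1.0 for rs in reviews.values() for rev, _ in rs}
    for _ in range(n_iters):
        deviations = {rev: [] for rev in reliability}
        for res, rs in reviews.items():
            for rev, rating in rs:
                deviations[rev].append(abs(rating - quality[res]))
        reliability = {rev: 1.0 / (1.0 + mean(ds)) for rev, ds in deviations.items()}
        quality = {
            res: sum(reliability[rev] * r for rev, r in rs)
                 / sum(reliability[rev] for rev, _ in rs)
            for res, rs in reviews.items()
        }
    return quality, reliability

On this toy data, reviewers s2 and s4, whose ratings deviate most from the initial mean consensus, end up down-weighted relative to s1 and s3; inferring and exploiting such per-reviewer reliability is the basic mechanism behind the probabilistic family of consensus approaches the abstract refers to.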
Pages: 106 - 120
Page count: 15
Related Papers
38 in total (first 10 shown below)
  • [1] Technologies for student-generated work in a peer-led, peer-review instructional environment.
    Stewart, IC
    Coppola, BP
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2002, 223 : U188 - U188
  • [2] Technologies for student-generated work in a peer-led, peer-review instructional environment.
    Coppola, BP
    ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY, 2002, 223 : U187 - U188
  • [3] Assessing the quality of a student-generated question repository
    Bates, Simon P.
    Galloway, Ross K.
    Riise, Jonathan
    Homer, Danny
    PHYSICAL REVIEW SPECIAL TOPICS-PHYSICS EDUCATION RESEARCH, 2014, 10 (02)
  • [4] Supporting peer evaluation of student-generated content: a study of three approaches
    Gyamfi, George
    Hanna, Barbara
    Khosravi, Hassan
    ASSESSMENT & EVALUATION IN HIGHER EDUCATION, 2022, 47 (07) : 1129 - 1147
  • [5] Assessing the quality of feedback in the peer-review process
    Dobele, A. R.
    HIGHER EDUCATION RESEARCH & DEVELOPMENT, 2015, 34 (05) : 853 - 868
  • [6] Quality Control Analysis of Student-Generated Pharmaceutical Capsules
    Bucholtz, Ehren C.
    French, Lisa M.
    Lavoie, Jaie P.
    Gaebelein, Claude J.
    JOURNAL OF CHEMICAL EDUCATION, 2010, 87 (10) : 1108 - 1109
  • [7] Student-generated content in college teaching: content quality, behavioural pattern and learning performance
    Yang, X.
    Guo, X.
    Yu, S.
    JOURNAL OF COMPUTER ASSISTED LEARNING, 2016, 32 (01) : 1 - 15
  • [8] Learnersourcing: Student-generated Content @ Scale 2nd Annual Workshop
    Moore, Steven
    Singh, Anjali
    Lu, Xinyi
    Jin, Hyoungwook
    Khosravi, Hassan
    Denny, Paul
    Brooks, Christopher
    Wang, Xu
    Kim, Juho
    Stamper, John
    PROCEEDINGS OF THE ELEVENTH ACM CONFERENCE ON LEARNING@SCALE, L@S 2024, 2024 : 559 - 562
  • [9] Analysis of quality of conference peer-review comments
    Albers, Michael J.
    WMSCI 2006: 10TH WORLD MULTI-CONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS, VOL VII, PROCEEDINGS, 2006, : 390 - 391
  • [10] Assessing the Quality of Student-Generated Short Answer Questions Using GPT-3
    Moore, Steven
    Nguyen, Huy A.
    Bier, Norman
    Domadia, Tanvi
    Stamper, John
    EDUCATING FOR A NEW FUTURE: MAKING SENSE OF TECHNOLOGY-ENHANCED LEARNING ADOPTION, EC-TEL 2022, 2022, 13450 : 243 - 257