Effects of objective and subjective competence on the reliability of crowdsourced relevance judgments

Cited by: 0
Authors
Samimi, Parnia
Ravana, Sri Devi [1]
Webber, William
Koh, Yun Sing [2]
Affiliations
[1] Univ Malaya, Dept Informat Syst, Kuala Lumpur, Malaysia
[2] Univ Auckland, Dept Comp Sci, Auckland, New Zealand
Keywords
Assessments; Systems; Quality; Search
DOI
Not available
Chinese Library Classification (CLC)
G25 [Library science, library work]; G35 [Information science, information work]
Subject classification code
1205; 120501
Abstract
Introduction. Despite the popularity of crowdsourcing, the reliability of crowdsourced output has been questioned, since crowdsourced workers display varied degrees of attention, ability and accuracy. It is important, therefore, to understand the factors that affect the reliability of crowdsourcing. In the context of producing relevance judgments, crowdsourcing has recently been proposed as an alternative to traditional methods of information retrieval evaluation, which are mostly expensive and scale poorly. Aim. The purpose of this study is to measure various cognitive characteristics of crowdsourced workers and to explore the effect these characteristics have on judgment reliability, as measured against a human gold standard. Method. The authors examined whether workers with high verbal comprehension skill outperform workers with low verbal comprehension skill in terms of judgment reliability in crowdsourcing. Results. A significant correlation was found between judgment reliability and measured verbal comprehension skill, as well as with self-reported difficulty of judgment and confidence in the task. Surprisingly, however, no correlation was found between self-reported topic knowledge and reliability. Conclusions. Our findings show that verbal comprehension skill influences the accuracy of the relevance judgments created by crowdsourced workers.
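As a rough illustration of the kind of analysis the abstract describes (not the authors' actual code or data), the sketch below computes each worker's agreement with a human gold standard and then the rank correlation between that reliability score and a verbal-comprehension test score. All worker IDs, document IDs, labels, and scores are hypothetical placeholders.

```python
# Minimal sketch, assuming binary relevance labels and a per-worker
# verbal-comprehension score; all data below is hypothetical.
from scipy.stats import spearmanr

# gold[doc_id] -> relevance label from the human gold standard
gold = {"d1": 1, "d2": 0, "d3": 1, "d4": 0}

# judgments[worker_id][doc_id] -> that worker's relevance label
judgments = {
    "w1": {"d1": 1, "d2": 0, "d3": 1, "d4": 1},
    "w2": {"d1": 0, "d2": 0, "d3": 1, "d4": 0},
    "w3": {"d1": 1, "d2": 1, "d3": 1, "d4": 0},
}

# verbal[worker_id] -> verbal-comprehension test score (hypothetical scale)
verbal = {"w1": 34, "w2": 21, "w3": 28}

def accuracy(labels: dict, gold: dict) -> float:
    """Fraction of documents on which the worker agrees with the gold standard."""
    return sum(labels[d] == gold[d] for d in gold) / len(gold)

workers = sorted(judgments)
reliability = [accuracy(judgments[w], gold) for w in workers]
skill = [verbal[w] for w in workers]

# Rank correlation between verbal comprehension skill and judgment reliability
rho, p_value = spearmanr(skill, reliability)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

A rank correlation is used here only as one plausible choice for relating an ordinal skill score to an accuracy proportion; the paper's own statistical procedure may differ.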
Pages: 21