Cognitively Inspired Task Design to Improve User Performance on Crowdsourcing Platforms

Cited by: 22
Authors
Sampath, Harini Alagarai [1 ]
Rajeshuni, Rajeev [1 ]
Indurkhya, Bipin [1 ]
Affiliations
[1] IIIT Hyderabad, Hyderabad, India
Keywords
Crowdsourcing; Cognitive Psychology; Task Design; Visual Saliency; Working Memory; Mechanical Turk; Eye Tracking; SEARCH; MODEL
DOI
10.1145/2556288.2557155
CLC number
TP3 [Computing technology, computer technology]
Subject classification code
0812
Abstract
Recent research in human computation has focused on improving the quality of work done by crowd workers on crowdsourcing platforms. Multiple approaches have been adopted, such as filtering crowd workers through qualification tasks and aggregating responses from multiple workers to obtain consensus. We investigate how improving the presentation of the task itself, using cognitively inspired features, affects the performance of crowd workers. We illustrate this with a case study on the task of extracting text from scanned images. We generated six task-presentation designs by modifying two parameters, the visual saliency of the target fields and the working-memory requirements, and conducted experiments on Amazon Mechanical Turk (AMT) and with an eye tracker in a lab setting. Our results identify which task-design parameters (e.g., highlighting target fields) improve performance and which do not (e.g., reducing the number of distractors). We conclude that the use of cognitively inspired features in task design is a powerful technique for maximizing the performance of crowd workers.
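The six task-presentation designs come from crossing the two design parameters. A minimal sketch of that factorial layout, assuming two saliency levels and three working-memory levels (the abstract states only that two parameters yield six designs; the specific level names here are hypothetical):

```python
from itertools import product

# Hypothetical factor levels: the abstract says six designs were generated
# by varying visual saliency and working-memory requirements, but does not
# name the levels. A 2 x 3 crossing is one split that yields six designs.
saliency_levels = ["plain", "highlighted"]   # assumed: 2 saliency levels
memory_levels = ["low", "medium", "high"]    # assumed: 3 memory-load levels

# Full factorial crossing of the two parameters.
designs = list(product(saliency_levels, memory_levels))
assert len(designs) == 6  # matches the six task-presentation designs

for saliency, memory in designs:
    print(f"design: saliency={saliency}, working-memory load={memory}")
```

Each tuple then corresponds to one experimental condition shown to AMT workers and lab participants.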
Pages: 3665-3674 (10 pages)