Towards a Classification Model for Tasks in Crowdsourcing

Cited by: 2
Authors
Alabduljabbar, Reham [1 ]
Al-Dossari, Hmood [2 ]
Institutions
[1] King Saud Univ, Coll Comp & Informat Sci, Informat Technol Dept, Riyadh, Saudi Arabia
[2] King Saud Univ, Coll Comp & Informat Sci, Informat Syst Dept, Riyadh, Saudi Arabia
Keywords
Crowdsourcing; Classification; Task; Amazon MTurk; Quality Control; SYSTEMS; MANAGEMENT; ISSUES;
DOI
10.1145/3018896.3018916
Chinese Library Classification
TP [Automation Technology; Computer Technology];
Subject Classification Code
0812 ;
Abstract
Crowdsourcing is an increasingly popular approach for harnessing the power of the crowd to perform tasks that cannot be solved sufficiently by machines. Text annotation and image labeling are two examples of crowdsourcing tasks that are difficult to automate and that often require human knowledge. However, the quality of the outcomes obtained from crowdsourcing remains problematic. To obtain high-quality results, different quality control mechanisms should be applied to evaluate the different types of tasks. In previous work, we presented a task ontology-based model that can be used to identify which quality mechanism is most appropriate for a given task type. In this paper, we complement that work by providing a categorization of crowdsourcing tasks; that is, we define the most common task types in the crowdsourcing context. We then show how machine learning algorithms can be used to automatically infer the type of a crowdsourced task.
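The abstract's final claim, that a machine learning model can infer a task's type from its description, can be sketched as a simple text classifier. The sketch below is illustrative only: the task-type labels, the training snippets, and the choice of a Laplace-smoothed naive Bayes model are assumptions for demonstration, not the paper's actual dataset, feature set, or algorithm.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled task descriptions (not from the paper's dataset).
TRAIN = [
    ("label the objects shown in each image", "image_labeling"),
    ("draw a bounding box around every car in the picture", "image_labeling"),
    ("annotate the sentiment of this tweet", "text_annotation"),
    ("tag each sentence with its part of speech", "text_annotation"),
    ("transcribe the audio clip into text", "transcription"),
    ("type out the words spoken in this recording", "transcription"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Collect per-class word counts for a multinomial naive Bayes model."""
    word_counts = defaultdict(Counter)  # class -> word -> count
    class_counts = Counter()            # class -> number of examples
    vocab = set()
    for text, label in examples:
        class_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the most likely task type under Laplace (add-one) smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)        # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(predict(model, "label every animal in the image"))  # → image_labeling
```

On real platform data one would typically use richer features (task metadata, reward, required qualifications) and a stronger classifier, but the pipeline shape, description text in and task type out, is the same.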
Pages: 7
Related Papers
50 records total
  • [31] Mobile crowdsourcing: four experiments on platforms and tasks
    Della Mea, Vincenzo
    Maddalena, Eddy
    Mizzaro, Stefano
    DISTRIBUTED AND PARALLEL DATABASES, 2015, 33 (01) : 123 - 141
  • [32] Debiased Label Aggregation for Subjective Crowdsourcing Tasks
    Wallace, Shaun
    Cai, Tianyuan
    Le, Brendan
    Leiva, Luis A.
    EXTENDED ABSTRACTS OF THE 2022 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2022, 2022,
  • [33] Statistical Quality Estimation for General Crowdsourcing Tasks
    Baba, Yukino
    Kashima, Hisashi
    19TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'13), 2013, : 554 - 562
  • [35] Multistep planning for crowdsourcing complex consensus tasks
    Deng, Zixuan
    Xiang, Yanping
    KNOWLEDGE-BASED SYSTEMS, 2021, 231
  • [36] The Effects of Feedback and Goal on the Quality of Crowdsourcing Tasks
    Lim, Jae-Eun
    Lee, Joonhwan
    Kim, Dongwhan
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2021, 37 (13) : 1207 - 1219
  • [37] Trends on Crowdsourcing JavaScript Small Tasks
    Zozas, Ioannis
    Anagnostou, Iason
    Bibi, Stamatia
    ENASE: PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON EVALUATION OF NOVEL APPROACHES TO SOFTWARE ENGINEERING, 2022, : 85 - 94
  • [38] Assessing Crowdsourcing Quality through Objective Tasks
    Aker, Ahmet
    El-Haj, Mahmoud
    Albakour, M-Dyaa
    Kruschwitz, Udo
    LREC 2012 - EIGHTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2012, : 1456 - 1461
  • [39] Sustainable Employment in India by Crowdsourcing Enterprise Tasks
    Roy, Shourya
    Balamurugan, Chithralekha
    Gujar, Sujit
    PROCEEDINGS OF THE 3RD ACM SYMPOSIUM ON COMPUTING FOR DEVELOPMENT (ACM DEV 2013), 2013,
  • [40] Optimal Assignment for Deadline Aware Tasks in the Crowdsourcing
    Bi, Ran
    Zheng, Xu
    Tan, Guozhen
    PROCEEDINGS OF 2016 IEEE INTERNATIONAL CONFERENCES ON BIG DATA AND CLOUD COMPUTING (BDCLOUD 2016) SOCIAL COMPUTING AND NETWORKING (SOCIALCOM 2016) SUSTAINABLE COMPUTING AND COMMUNICATIONS (SUSTAINCOM 2016) (BDCLOUD-SOCIALCOM-SUSTAINCOM 2016), 2016, : 178 - 184