Measuring the Quality of Annotations for a Subjective Crowdsourcing Task

Cited: 3
Authors
Justo, Raquel [1 ]
Ines Torres, M. [1 ]
Alcaide, Jose M. [1 ]
Affiliations
[1] Univ Pais Vasco UPV EHU, Sarriena S-N, Leioa 48940, Spain
Keywords
Supervised learning; Annotation; Crowdsourcing; Subjective language; Agreement
DOI
10.1007/978-3-319-58838-4_7
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this work, an algorithm for detecting low-quality annotations is proposed. It focuses on subjective annotation tasks carried out on crowdsourcing platforms. In this kind of task, where a single correct response is not fixed in advance, several measures should be combined to capture the different annotator behaviours associated with poor-quality results: completion time, inter-annotator agreement, and repeated patterns in responses. The proposed algorithm considers all of these measures and provides a set of workers whose annotations should be removed. Experiments carried out on a sarcasm annotation task show that once the low-quality annotations were removed and re-acquired, a better labelled set was obtained.
Pages: 58-68
Number of pages: 11
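The abstract describes an algorithm that combines three behavioural signals (completion time, inter-annotator agreement, and repeated response patterns) to flag workers whose annotations should be removed. The Python sketch below is a minimal illustration of that idea under stated assumptions, not the paper's actual method: the annotation record layout, the function names, and the thresholds MIN_SECONDS, MIN_AGREEMENT, and MAX_LABEL_SHARE are all hypothetical.

    from collections import Counter

    # Illustrative thresholds -- hypothetical values, not taken from the paper.
    MIN_SECONDS = 2.0       # answers faster than this suggest clicking through
    MIN_AGREEMENT = 0.4     # minimum fraction of labels matching the item majority
    MAX_LABEL_SHARE = 0.9   # one label used this often suggests a repeated pattern

    def majority_labels(annotations):
        """Majority label per item; annotations are (worker, item, label, seconds)."""
        votes = {}
        for _, item, label, _ in annotations:
            votes.setdefault(item, Counter())[label] += 1
        return {item: c.most_common(1)[0][0] for item, c in votes.items()}

    def suspicious_workers(annotations):
        """Flag workers by the time, agreement, and repeated-pattern heuristics."""
        majority = majority_labels(annotations)
        by_worker = {}
        for worker, item, label, seconds in annotations:
            by_worker.setdefault(worker, []).append((item, label, seconds))

        flagged = set()
        for worker, rows in by_worker.items():
            n = len(rows)
            fast_share = sum(1 for _, _, s in rows if s < MIN_SECONDS) / n
            agreement = sum(1 for item, label, _ in rows if majority[item] == label) / n
            top_share = Counter(label for _, label, _ in rows).most_common(1)[0][1] / n
            if fast_share > 0.5 or agreement < MIN_AGREEMENT or top_share > MAX_LABEL_SHARE:
                flagged.add(worker)
        return flagged

In practice, annotations from the flagged workers would be discarded and the affected items re-posted for re-annotation, which corresponds to the remove-and-re-acquire step the abstract reports.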
Related papers
(50 in total)
  • [1] CROWDSOURCING SUBJECTIVE IMAGE QUALITY EVALUATION
    Ribeiro, Flavio
    Florencio, Dinei
    Nascimento, Vitor
    2011 18TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2011
  • [2] Subjective Quality Evaluations Using Crowdsourcing
    Salas, Oscar Figuerola
    Adzic, Velibor
    Kalva, Hari
    2013 PICTURE CODING SYMPOSIUM (PCS), 2013, : 418 - 421
  • [3] Intra- and Inter-rater Agreement in a Subjective Speech Quality Assessment Task in Crowdsourcing
    Jimenez, Rafael Zequeira
    Llagostera, Anna
    Naderi, Babak
    Moeller, Sebastian
    Berger, Jens
    COMPANION OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019, : 1138 - 1143
  • [4] Modeling Subjective Affect Annotations with Multi-Task Learning
    Hayat, Hassan
    Ventura, Carles
    Lapedriza, Agata
    SENSORS, 2022, 22 (14)
  • [5] Crowdsourcing Discourse Relation Annotations by a Two-Step Connective Insertion Task
    Yung, Frances
    Scholman, Merel C. J.
    Demberg, Vera
    13TH LINGUISTIC ANNOTATION WORKSHOP (LAW XIII), 2019, : 16 - 25
  • [6] Task Assignment with Guaranteed Quality for Crowdsourcing Platforms
    Yin, Xiaoyan
    Chen, Yanjiao
    Li, Baochun
    2017 IEEE/ACM 25TH INTERNATIONAL SYMPOSIUM ON QUALITY OF SERVICE (IWQOS), 2017
  • [7] A Transcription Task for Crowdsourcing with Automatic Quality Control
    Lee, Chia-ying
    Glass, James
    12TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2011 (INTERSPEECH 2011), VOLS 1-5, 2011, : 3048 - 3051
  • [8] Crowdsourcing Scholarly Discourse Annotations
    Oelen, Allard
    Stocker, Markus
    Auer, Soeren
    IUI '21 - 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT USER INTERFACES, 2021, : 464 - 474
  • [9] Explainable Modeling of Annotations in Crowdsourcing
    Nguyen, An T.
    Lease, Matthew
    Wallace, Byron C.
    PROCEEDINGS OF IUI 2019, 2019, : 575 - 579
  • [10] Measuring of subjective quality of life
    Sores, Anett
    Peto, Karoly
    EMERGING MARKETS QUERIES IN FINANCE AND BUSINESS 2014, EMQFB 2014, 2015, 32 : 809 - 816