Who Should Be Selected to Perform a Task in Crowdsourced Testing?
Cited by: 23
Authors:
Cui, Qiang [1,4]
Wang, Junjie [1]
Yang, Guowei [2]
Xie, Miao [1,4]
Wang, Qing [1,3,4]
Li, Mingshu [1,3,4]
Affiliations:
[1] Chinese Acad Sci, Inst Software, Lab Internet Software Technol, Beijing, Peoples R China
[2] Texas State Univ, Dept Comp Sci, San Marcos, TX USA
[3] Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China
[4] Univ Chinese Acad Sci, Beijing, Peoples R China
Crowdsourced testing is an emerging trend in software testing that relies on crowd workers to accomplish test tasks. Due to cost constraints, a test task usually involves a limited number of crowd workers; moreover, involving more workers does not necessarily result in detecting more bugs. Workers differ in testing experience and expertise, and these differences can strongly affect test outcomes. For example, inappropriate workers may miss true bugs, introduce false bugs, or report duplicate bugs, which degrades test quality. In current practice, a test task is usually dispatched at random, so the quality of testing cannot be guaranteed. It is therefore important to select an appropriate subset of workers to perform a test task so as to ensure a high bug detection rate. This paper introduces ExReDiv, a novel hybrid approach that selects a set of workers for a test task. It consists of three key strategies: the experience strategy selects experienced workers; the relevance strategy selects workers whose expertise is relevant to the given test task; and the diversity strategy selects diverse workers to avoid duplicate bug reports. We evaluate ExReDiv on 42 test tasks from one of the largest crowdsourced testing platforms in China, and the experimental results show its effectiveness.
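The abstract names the three strategies but not how they are scored or combined; that detail is in the paper itself. As a rough illustration only, the Python sketch below shows one way an experience/relevance/diversity hybrid could greedily pick k workers, assuming term-overlap relevance, coverage-based diversity, and a weighted linear combination. All names, weights, and data structures here are hypothetical, not the authors' formulation.

# Hypothetical sketch of hybrid worker selection in the spirit of ExReDiv's
# three strategies. Scoring details, weights, and data shapes are assumptions.
from dataclasses import dataclass, field

@dataclass
class Worker:
    name: str
    experience: float                                 # assumed: normalized past testing activity
    expertise: set[str] = field(default_factory=set)  # assumed: terms mined from past bug reports

def relevance(worker: Worker, task_terms: set[str]) -> float:
    """Jaccard overlap between a worker's expertise terms and the task description."""
    if not worker.expertise or not task_terms:
        return 0.0
    return len(worker.expertise & task_terms) / len(worker.expertise | task_terms)

def diversity(worker: Worker, selected: list[Worker]) -> float:
    """Fraction of this worker's expertise not already covered by selected workers."""
    if not selected:
        return 1.0
    covered = set().union(*(w.expertise for w in selected))
    if not worker.expertise:
        return 0.0
    return len(worker.expertise - covered) / len(worker.expertise)

def select_workers(candidates: list[Worker], task_terms: set[str], k: int,
                   w_exp: float = 0.3, w_rel: float = 0.4, w_div: float = 0.3) -> list[Worker]:
    """Greedily pick k workers by a weighted sum of the three strategy scores."""
    selected: list[Worker] = []
    pool = list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda w: (w_exp * w.experience
                                        + w_rel * relevance(w, task_terms)
                                        + w_div * diversity(w, selected)))
        selected.append(best)
        pool.remove(best)
    return selected

Greedy selection is a natural fit for a diversity term, since the diversity score of each remaining candidate depends on the workers already chosen, so every pick re-ranks the pool.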