Article

Characterizing Crowds to Better Optimize Worker Recommendation in Crowdsourced Testing

Journal

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 47, Issue 6, Pages 1259-1276

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSE.2019.2918520

Keywords

Crowdsourced testing; crowd worker recommendation; multi-objective optimization

Funding

  1. National Key Research and Development Program of China [2018YFB1403400]
  2. National Natural Science Foundation of China [61602450, 61432001]
  3. China Scholarship Council


Crowdsourced testing is an emerging trend in which test tasks are entrusted to online crowd workers. Typically, a crowdsourced test task aims to detect as many bugs as possible within a limited budget. However, not all crowd workers are equally skilled at finding bugs; inappropriate workers may miss bugs or report duplicate bugs, while hiring them still consumes a nontrivial budget. It is therefore valuable to recommend a set of appropriate crowd workers for a test task so that more software bugs can be detected with fewer workers. This paper first presents a new characterization of crowd workers along three dimensions: testing context, capability, and domain knowledge. Based on this characterization, we then propose the Multi-Objective Crowd wOrker recoMmendation approach (MOCOM), which aims to recommend a minimum number of crowd workers who can detect the maximum number of bugs in a crowdsourced testing task. Specifically, MOCOM recommends crowd workers by maximizing their bug detection probability, their relevance to the test task, and their diversity, while minimizing the test cost. We experimentally evaluate MOCOM on 532 test tasks; the results show that MOCOM significantly outperforms five commonly used and state-of-the-art baselines. Furthermore, MOCOM reduces duplicate reports and recommends workers with higher relevance and larger bug detection probability, and because of this it finds more bugs with fewer workers.
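As a rough illustration of the four objectives described above, the following sketch scores a candidate set of workers. It is an assumption-laden toy, not the paper's actual model: the `Worker` fields (`bug_prob`, `relevance`, `terms`, `cost`) and the averaging/coverage formulas are hypothetical stand-ins for MOCOM's learned measures.

```python
# Hypothetical sketch of MOCOM-style objectives; field names and scoring
# formulas are illustrative assumptions, not the paper's exact formulation.
from dataclasses import dataclass, field


@dataclass
class Worker:
    bug_prob: float                           # historical bug detection probability
    relevance: float                          # match between worker expertise and the task
    terms: set = field(default_factory=set)   # testing-context terms (device, OS, app domain)
    cost: float = 1.0                         # cost of hiring this worker


def objectives(candidates):
    """Return (bug_detection, relevance, diversity, cost) for a candidate set.

    A multi-objective optimizer would maximize the first three values
    and minimize the last one over subsets of the worker pool.
    """
    n = len(candidates)
    bug_detection = sum(w.bug_prob for w in candidates) / n
    relevance = sum(w.relevance for w in candidates) / n
    # Diversity here is the fraction of context terms that are distinct
    # across the set: 1.0 means no worker duplicates another's context.
    all_terms = set().union(*(w.terms for w in candidates))
    total_terms = sum(len(w.terms) for w in candidates)
    diversity = len(all_terms) / total_terms if total_terms else 0.0
    cost = sum(w.cost for w in candidates)
    return bug_detection, relevance, diversity, cost
```

In MOCOM these competing objectives are balanced by a multi-objective search (no single subset maximizes all four at once), yielding a trade-off front of recommended worker sets rather than one fixed ranking.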

