Article

Advanced Crowdsourced Test Report Prioritization Based on Adaptive Strategy

Journal

IEEE Access
Volume 10, Pages 53522-53532

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/ACCESS.2022.3176086

Keywords

Task analysis; Greedy algorithms; Crowdsourcing; Software algorithms; Software testing; Encoding; Software; Crowdsourced software testing; test report prioritization; text classification

Funding

  1. Special Fund for Military-Civilian Integration Development of Hebei Province [JMYB-2020-01]
  2. Key Project of Natural Science Research in Anhui Higher Education Institutions [KJ2019ZD67]

Abstract

Crowdsourced testing is an emerging trend in software testing that takes advantage of the efficiency of crowdsourcing and cloud platforms, and it has gradually been applied in many fields. In crowdsourced software testing, after the crowd workers complete their test tasks, they submit the results as test reports. Checking the resulting large number of test reports is therefore an arduous but unavoidable software maintenance task. Because crowdsourced test reports are numerous and complex, they need to be prioritized to improve inspection efficiency, yet there are no systematic methods for crowdsourced test report prioritization. In regression testing, by contrast, test case prioritization technology has matured. We therefore migrate test case prioritization methods to crowdsourced test report prioritization and evaluate their effectiveness. We use natural language processing and word segmentation to process the text in the test reports, and then apply four methods to prioritize the reports: the total greedy algorithm, the additional greedy algorithm, the genetic algorithm, and ART. The results show that all of these methods perform well in prioritizing crowdsourced test reports, achieving an average APFD of more than 0.8.
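
The paper's code is not reproduced here; as a rough illustration of the coverage-based greedy strategies the abstract names, below is a minimal Python sketch of the additional greedy algorithm applied to reports reduced to keyword sets. The keyword-set representation (standing in for the output of the NLP/word-segmentation step) and all names are assumptions for illustration, not the authors' implementation.

    from typing import Dict, List, Set

    def additional_greedy(reports: Dict[str, Set[str]]) -> List[str]:
        """Order report IDs so each pick maximizes newly covered keywords.

        Hypothetical sketch: each report is assumed to be reduced (e.g. by
        word segmentation) to a set of keywords; the additional greedy
        strategy repeatedly picks the report that covers the most keywords
        not yet covered by earlier picks.
        """
        remaining = dict(reports)
        covered: Set[str] = set()
        order: List[str] = []
        while remaining:
            # Pick the report adding the most uncovered keywords.
            best = max(remaining, key=lambda r: len(remaining[r] - covered))
            if not remaining[best] - covered:
                if not remaining[best]:      # only keyword-less reports left
                    order.extend(remaining)
                    break
                covered = set()              # standard additional-greedy reset
                continue
            order.append(best)
            covered |= remaining.pop(best)
        return order

    # Toy usage:
    reports = {"r1": {"crash", "login"}, "r2": {"login"}, "r3": {"ui", "crash"}}
    print(additional_greedy(reports))  # -> ['r1', 'r3', 'r2']

The total greedy variant would instead rank reports once by their full keyword counts; the additional variant re-ranks after each pick, which is why it tends to surface diverse reports earlier.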
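
The abstract measures effectiveness with APFD (Average Percentage of Faults Detected), a standard prioritization metric. For reference, here is the usual APFD formula in a short Python sketch; the data layout (report ID mapped to the faults it reveals) is an assumption, and the formula assumes every fault is revealed by at least one report in the ordering.

    from typing import Dict, List, Set

    def apfd(order: List[str], reveals: Dict[str, Set[str]]) -> float:
        """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n),

        where n is the number of reports, m the number of faults, and
        TF_i is the 1-based position of the first report revealing
        fault i. Assumes every fault is revealed by at least one report.
        """
        n = len(order)
        first_pos: Dict[str, int] = {}
        for pos, rid in enumerate(order, start=1):
            for fault in reveals.get(rid, ()):
                first_pos.setdefault(fault, pos)
        m = len(first_pos)
        return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)

    # Toy usage: f1 is found by the 1st report, f2 by the 2nd.
    print(apfd(["r1", "r3", "r2"], {"r1": {"f1"}, "r3": {"f2"}}))  # ~0.667

An average APFD above 0.8, as reported in the paper, means that on average most fault-revealing reports appear early in the prioritized order.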
