Article

Advanced Crowdsourced Test Report Prioritization Based on Adaptive Strategy

Journal

IEEE ACCESS
Volume 10, Pages 53522-53532

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/ACCESS.2022.3176086

Keywords

Task analysis; Greedy algorithms; Crowdsourcing; Software algorithms; Software testing; Encoding; Software; Crowdsourced software testing; test report prioritization; text classification

Funding

  1. Special Fund for Military-Civilian Integration Development of Hebei Province [JMYB-2020-01]
  2. Key Project of Natural Science Research in Anhui Higher Education Institutions [KJ2019ZD67]

Abstract

Crowdsourced testing is an emerging trend in software testing that takes advantage of the efficiency of crowdsourcing and cloud platforms, and it has gradually been applied in many fields. In crowdsourced software testing, after crowdsourced workers complete their test tasks, they submit the results as test reports. Inspecting the resulting large number of test reports is therefore an arduous but unavoidable software maintenance task: the reports are numerous and complex, and they need to be ordered to improve inspection efficiency, yet there are no systematic methods for prioritizing them. In regression testing, by contrast, test case prioritization techniques have matured. We therefore migrate test case prioritization methods to crowdsourced test report prioritization and evaluate their effectiveness. We use natural language processing and word segmentation to process the text of the test reports, and then prioritize the reports with four methods: the total greedy algorithm, the additional greedy algorithm, a genetic algorithm, and adaptive random testing (ART). The results show that all of these methods perform well in prioritizing crowdsourced test reports, with an average APFD (Average Percentage of Faults Detected) of more than 0.8.
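
For illustration, the following is a minimal Python sketch, not the authors' implementation, of how one of the four strategies (additional greedy) could be applied to test reports reduced to keyword sets by word segmentation, together with the standard APFD computation. The report contents and fault-to-report mapping in the usage example are hypothetical.

```python
# Hypothetical illustration: prioritize crowdsourced test reports,
# each reduced to a set of keywords by word segmentation, with the
# additional greedy strategy, then score the ordering with APFD.

def additional_greedy(reports):
    """Order reports so each pick covers the most not-yet-covered keywords."""
    remaining = {r: set(kw) for r, kw in reports.items()}
    covered = set()
    order = []
    while remaining:
        best = max(remaining, key=lambda r: len(remaining[r] - covered))
        if not remaining[best] - covered:
            if covered:
                covered = set()  # standard reset when no report adds coverage
                continue
            order.extend(remaining)  # leftover reports have no keywords at all
            break
        covered |= remaining.pop(best)
        order.append(best)
    return order

def apfd(order, faults):
    """APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n),
    where TF_i is the position of the first report revealing fault i."""
    position = {r: i + 1 for i, r in enumerate(order)}
    tf = sum(min(position[r] for r in revealed_by)
             for revealed_by in faults.values())
    n, m = len(order), len(faults)
    return 1 - tf / (n * m) + 1 / (2 * n)

if __name__ == "__main__":
    # Hypothetical keyword sets and fault-to-report mapping.
    reports = {
        "r1": {"login", "crash"},
        "r2": {"login"},
        "r3": {"upload", "timeout", "crash"},
    }
    faults = {"f1": {"r1", "r2"}, "f2": {"r3"}}
    order = additional_greedy(reports)
    print(order, round(apfd(order, faults), 3))  # ['r3', 'r1', 'r2'] 0.667
```

The total greedy variant would instead rank reports once by their full keyword counts, while the genetic-algorithm and ART variants replace the selection loop with search-based or distance-based picking; the paper itself should be consulted for the exact encodings used.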
