Journal
Journal of Systems and Software
Volume 79, Issue 5, Pages 591-601
Publisher
Elsevier Science Inc
DOI: 10.1016/j.jss.2005.05.029
Keywords
testing effectiveness metric; quality measurement; adaptive random testing; random testing; software testing
Abstract
We examine the statistical variability of three commonly used software testing effectiveness measures: the E-measure (expected number of failures detected), the P-measure (probability of detecting at least one failure), and the F-measure (number of tests required to detect the first failure). We show that for random testing with replacement, the F-measure follows the geometric distribution. A simulation study examines the F-measure distribution of two adaptive random testing methods, to investigate how closely their sampling distributions approximate the geometric distribution. One key observation is that in the worst-case scenario, the sampling distribution of adaptive random testing is very similar to that of random testing. The E-measure and P-measure have a normal sampling distribution but high variability, meaning that large sample sizes are required to obtain results with satisfactorily narrow confidence intervals. We illustrate this with a simulation study for the P-measure. Our results reinforce, from a perspective other than empirical analysis, that adaptive random testing is a more effective alternative to random testing with respect to the F-measure. We consider the implications of our findings for previous studies conducted in the area, and make recommendations for future studies. (c) 2005 Elsevier Inc. All rights reserved.
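The claim that the F-measure is geometrically distributed under random testing with replacement can be checked with a small simulation. The sketch below is illustrative only (it is not the authors' code): it assumes a program whose inputs fail independently with probability `theta`, draws tests with replacement until the first failure, and compares the sample mean and minimum of the resulting F-measure values against the geometric distribution's known properties (mean 1/theta, support starting at 1).

```python
import random
import statistics

def f_measure(theta: float, rng: random.Random) -> int:
    """Count random tests (drawn with replacement) until the first failure.

    Each test fails independently with probability theta, so this count
    is a geometric random variable with success probability theta.
    """
    n = 1
    while rng.random() >= theta:
        n += 1
    return n

rng = random.Random(0)
theta = 0.05  # assumed failure rate of the hypothetical program under test
samples = [f_measure(theta, rng) for _ in range(100_000)]

# For a geometric distribution: mean = 1/theta, minimum possible value = 1.
print(statistics.mean(samples))  # close to 1/theta = 20
print(min(samples))              # 1: a failure can occur on the first test
```

With 100,000 replications the sample mean settles near 20, matching the geometric mean 1/theta; the high variance visible in the samples echoes the paper's point that narrow confidence intervals demand large sample sizes.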