Journal
JOURNAL OF SYSTEMS AND SOFTWARE
Volume 192, Issue -, Pages -
Publisher
ELSEVIER SCIENCE INC
DOI: 10.1016/j.jss.2022.111391
Keywords
Information Theory; Mutual Information; Finite State Machines
This paper introduces the use of Mutual Information as a measure for generating test suites and presents a new algorithm that utilizes Biased Mutual Information. Experimental results show that this approach is more effective at generating test suites with high fault-finding capability than other measures and methods, and that it allows a compromise between fault detection and execution time.
Mutual Information is an information-theoretic measure designed to quantify the amount of similarity between two random variables ranging over two sets. In recent work we have used it as the basis for a measure, called Biased Mutual Information, to guide the selection of a test suite among different possibilities. In this paper, we adapt this concept and show how it can be used to address the problem of generating a test suite with high fault-finding capability, in a black-box scenario and following a maximise-diversity approach. Additionally, we present a new Grammar-Guided Genetic Programming algorithm that uses Biased Mutual Information to guide the generation of such test suites. Our experimental results clearly show the potential value of our measure when used to generate test suites. Moreover, they show that our measure is better at guiding test generation than current state-of-the-art measures, such as Test Set Diameter (TSDm) measures. Additionally, we compared our proposal with classical completeness-oriented methods, such as the H-Method and the Transition Tour method, and found that our proposal produces smaller test suites with sufficiently high fault-finding capability. Therefore, our methodology is preferable in a scenario where a compromise is necessary between fault detection and execution time.
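To make the underlying measure concrete, the following is a minimal sketch of *standard* mutual information between two discrete sequences (for instance, the symbol sequences of two test cases). Note that this is plain I(X;Y), not the paper's Biased Mutual Information, whose bias term is defined in the article itself; the function name and the example sequences are illustrative assumptions.

```python
# Sketch: standard mutual information I(X;Y) between two equal-length
# discrete sequences, estimated from empirical symbol frequencies.
# This is NOT the paper's Biased Mutual Information, only the classic
# measure it builds on.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) = sum over (x,y) of p(x,y) * log2(p(x,y) / (p(x) * p(y)))."""
    assert len(xs) == len(ys), "sequences must have equal length"
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p(x) * p(y)) = c * n / (px[x] * py[y])
        mi += p_joint * log2(c * n / (px[x] * py[y]))
    return mi

# Two identical sequences: MI equals the entropy of the sequence.
seq = ['a', 'b', 'a', 'c']
print(mutual_information(seq, seq))  # 1.5 bits
```

With `seq = ['a', 'b', 'a', 'c']` the symbol probabilities are 0.5, 0.25, 0.25, so the entropy (and hence the self-MI) is 1.5 bits, while two sequences with no statistical dependence yield an MI of 0. A diversity-oriented generator would prefer test pairs with *low* mutual information, i.e. tests that share little information.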