Article

Reliability and validity in comparative studies of software prediction models

Journal

IEEE TRANSACTIONS ON SOFTWARE ENGINEERING
Volume 31, Issue 5, Pages 380-391

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSE.2005.58

Keywords

software metrics; cost estimation; cross-validation; empirical methods; arbitrary function approximators; machine learning; estimation by analogy; regression analysis; simulation; reliability; validity; accuracy indicators

Abstract

Empirical studies of software prediction models do not converge on the question of which prediction model is best, and the reason for this lack of convergence is poorly understood. In this simulation study, we examined a frequently used research procedure comprising three main ingredients: a single data sample, an accuracy indicator, and cross-validation. Typically, such empirical studies compare a machine learning model with a regression model; in our study, we use simulation to compare a machine learning model and a regression model. The results suggest that it is the research procedure itself that is unreliable, and this unreliability may strongly contribute to the lack of convergence. Our findings thus cast doubt on the conclusions of any study of competing software prediction models that used this research procedure as the basis of model comparison. We therefore need to develop more reliable research procedures before we can have confidence in the conclusions of comparative studies of software prediction models.
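The research procedure the abstract critiques can be sketched in code. The following is an illustrative toy simulation, not the authors' actual experimental setup: it repeatedly draws samples from one known population, compares a regression model against an analogy-based (nearest-neighbor) estimator using leave-one-out cross-validation and the MMRE accuracy indicator, and counts which model "wins" on each sample. All function names and parameter choices (sample size, noise level) are assumptions made for illustration.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error, a common accuracy indicator."""
    return np.mean(np.abs(actual - predicted) / actual)

def loo_mmre(x, y, predict):
    """Leave-one-out cross-validation: fit on n-1 points, predict the held-out one."""
    n = len(x)
    preds = []
    for i in range(n):
        mask = np.arange(n) != i
        preds.append(predict(x[mask], y[mask], x[i]))
    return mmre(y, np.array(preds))

def ols_predict(x_tr, y_tr, x_new):
    """Regression model: ordinary least squares on a single size driver."""
    slope, intercept = np.polyfit(x_tr, y_tr, 1)
    return slope * x_new + intercept

def analogy_predict(x_tr, y_tr, x_new):
    """Estimation by analogy: reuse the effort of the most similar past project (1-NN)."""
    return y_tr[np.argmin(np.abs(x_tr - x_new))]

rng = np.random.default_rng(0)
wins = {"regression": 0, "analogy": 0}
for _ in range(50):                              # 50 independent samples, same population
    x = rng.uniform(1, 10, 30)                   # project size
    y = 10 + 3 * x + rng.normal(0, 4, 30)        # effort: linear trend plus noise
    y = np.abs(y) + 1                            # keep efforts positive so MMRE is defined
    m_reg = loo_mmre(x, y, ols_predict)
    m_ana = loo_mmre(x, y, analogy_predict)
    wins["regression" if m_reg < m_ana else "analogy"] += 1

print(wins)
```

Because every sample comes from the same population, a reliable procedure would name the same winner every time; any spread in the `wins` tally across replications is variability introduced by the procedure itself, which is the kind of unreliability the paper investigates.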

