Journal
TRAC-TRENDS IN ANALYTICAL CHEMISTRY
Volume 25, Issue 11, Pages 1112-1124
Publisher
ELSEVIER SCI LTD
DOI: 10.1016/j.trac.2006.10.010
Keywords
bootstrap; classification; confidence interval; Latin partition; prediction; validation
Abstract
Unbiased evaluation of classification and calibration methods is important, especially as these methods are applied to increasingly complex, under-determined data sets. Precision bounds, such as confidence intervals, are required for interpreting any experimental result. Bootstrapped Latin partitions were used to evaluate classification and calibration models, yielding bounds on the average predictions. These bounds characterize the variation attributable to building the model and to the composition of the training set relative to the test set. Furthermore, precision bounds on the averaged model-variable loadings allow the significance of characteristic features to be estimated. The procedure for bootstrapped Latin partitions is given and demonstrated with synthetic data sets, for classification using linear discriminant analysis and fuzzy rule-building expert systems, and for calibration using partial least squares regression with one and three properties. All analyses were implemented on a personal computer, with the longest evaluation requiring 6 h of processing time. Analysis of variance and matched-sample t-tests were also used to demonstrate the statistical power of these comparisons. (c) 2006 Published by Elsevier Ltd.
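The core idea in the abstract can be illustrated with a minimal sketch. A Latin partition splits the samples into disjoint folds while preserving class proportions in each fold; repeating the whole partitioning many times (the "bootstrap" here) yields a distribution of average prediction results, from which percentile precision bounds are read off. The sketch below is an assumption-laden toy, not the paper's implementation: a simple nearest-mean classifier stands in for LDA, FuRES, or PLS, the data are synthetic, and all names (`latin_partition_indices`, `nearest_mean_accuracy`) are hypothetical.

```python
# Hedged sketch of bootstrapped Latin partitions on synthetic two-class data.
# A nearest-mean classifier is used as a stand-in model (not from the paper).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: 50 samples per class, shifted class means
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def latin_partition_indices(y, n_parts, rng):
    """Split sample indices into n_parts disjoint folds, preserving
    class proportions in each fold (a 'Latin partition')."""
    folds = [[] for _ in range(n_parts)]
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)
        rng.shuffle(idx)
        for i, j in enumerate(idx):
            folds[i % n_parts].append(j)
    return [np.array(f) for f in folds]

def nearest_mean_accuracy(X_tr, y_tr, X_te, y_te):
    """Classify each test point by the closer class mean; return accuracy."""
    means = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    d = ((X_te[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return float((d.argmin(axis=1) == y_te).mean())

n_boot, n_parts = 100, 4
avg_acc = []
for _ in range(n_boot):  # each bootstrap draws a fresh random Latin partition
    folds = latin_partition_indices(y, n_parts, rng)
    accs = []
    for k in range(n_parts):  # every fold serves once as the test set
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(n_parts) if j != k])
        accs.append(nearest_mean_accuracy(X[tr], y[tr], X[te], y[te]))
    avg_acc.append(np.mean(accs))

# Percentile-based 95% precision bounds on the average prediction accuracy
lo, hi = np.percentile(avg_acc, [2.5, 97.5])
print(f"mean accuracy {np.mean(avg_acc):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because every repartition reshuffles which samples land in the training versus test folds, the spread of `avg_acc` captures exactly the two variance sources the abstract names: model building and training/test-set composition.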