4.1 Article

Key concepts in model selection: Performance and generalizability

Journal

JOURNAL OF MATHEMATICAL PSYCHOLOGY
Volume 44, Issue 1, Pages 205-231

Publisher

ACADEMIC PRESS INC
DOI: 10.1006/jmps.1999.1284

Keywords

-

Abstract

What is model selection? What are the goals of model selection? What are the methods of model selection and how do they work? Which methods perform better than others and in what circumstances? These questions rest on a number of key concepts in a relatively underdeveloped field. The aim of this paper is to explain some background concepts, to highlight some of the results in this special issue, and to add my own. The standard methods of model selection include classical hypothesis testing, maximum likelihood, Bayes method, minimum description length, cross-validation, and Akaike's information criterion. They all provide an implementation of Occam's razor, in which parsimony or simplicity is balanced against goodness-of-fit. These methods primarily take account of the sampling errors in parameter estimation, although their relative success at this task depends on the circumstances. However, the aim of model selection should also include the ability of a model to generalize to predictions in a different domain. Errors of extrapolation, or generalization, are different from errors of parameter estimation. So, it seems that simplicity and parsimony may be an additional factor in managing these errors, in which case the standard methods of model selection are incomplete implementations of Occam's razor. (C) 2000 Academic Press.
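As a minimal illustration (not taken from the article) of how two of the methods named in the abstract trade goodness-of-fit against parsimony, the sketch below compares polynomial models of increasing order using Akaike's information criterion and leave-one-out cross-validation. The data-generating function, noise level, and model family are illustrative assumptions.

```python
# Sketch: model selection by AIC and leave-one-out cross-validation on
# polynomial fits. Illustrative only; data and models are assumed.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy observations

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return AIC and LOO-CV error."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1               # k = number of free parameters
    sigma2 = np.mean(resid ** 2)            # ML estimate of noise variance
    aic = n * np.log(sigma2) + 2 * k        # AIC up to an additive constant

    # Leave-one-out cross-validation: refit with each point held out in turn.
    loo_sq_errors = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        c = np.polyfit(x[mask], y[mask], degree)
        loo_sq_errors.append((y[i] - np.polyval(c, x[i])) ** 2)
    return aic, np.mean(loo_sq_errors)

for d in (1, 3, 9):
    aic, cv = fit_and_score(d)
    print(f"degree {d}: AIC = {aic:.1f}, LOO-CV MSE = {cv:.3f}")
```

Both criteria penalize the over-flexible degree-9 fit despite its lower training error, which is the sense in which such methods balance simplicity against goodness-of-fit; as the abstract notes, this addresses sampling error in parameter estimation but not, by itself, errors of extrapolation to a different domain.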

