4.6 Review

Marginal Likelihood Computation for Model Selection and Hypothesis Testing: An Extensive Review

Journal

SIAM REVIEW
Volume 65, Issue 1, Pages 3-58

Publisher

SIAM PUBLICATIONS
DOI: 10.1137/20M1310849

Keywords

marginal likelihood; Bayesian evidence; numerical integration; model selection; hypothesis testing; quadrature rules; doubly intractable posteriors; partition functions


This is an up-to-date introduction to, and overview of, marginal likelihood computation for model selection and hypothesis testing. Computing normalizing constants of probability models (or ratios of constants) is a fundamental issue in many applications in statistics, applied mathematics, signal processing, and machine learning. This article provides a comprehensive study of the state of the art of the topic. We highlight limitations, benefits, connections, and differences among the different techniques. Problems and possible solutions with the use of improper priors are also described. Some of the most relevant methodologies are compared through theoretical comparisons and numerical experiments.
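The quantity at the center of the abstract is the marginal likelihood (evidence) Z = ∫ p(y | θ) p(θ) dθ. As a minimal, hypothetical illustration of the problem (a sketch under assumed model settings, not the paper's own method or experiments), the snippet below compares the naive prior-sampling Monte Carlo estimator of Z with the closed-form evidence of a conjugate Gaussian model; all parameter values and sample sizes are arbitrary choices for demonstration.

```python
# Illustrative sketch (not from the paper): naive Monte Carlo estimate of the
# marginal likelihood Z = integral of p(y | theta) p(theta) d(theta) for a
# conjugate Gaussian model, where Z is also available in closed form.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed toy model (hypothetical choices):
#   prior:      theta ~ N(mu0, tau0^2)
#   likelihood: y_i | theta ~ N(theta, sigma^2), i = 1..n
mu0, tau0, sigma = 0.0, 2.0, 1.0
y = rng.normal(loc=1.5, scale=sigma, size=20)

def log_likelihood(theta):
    """Log p(y | theta), summed over observations; theta may be a vector."""
    return norm.logpdf(y[:, None], loc=theta, scale=sigma).sum(axis=0)

# Naive (prior-sampling) Monte Carlo: Z ~= (1/N) * sum_k p(y | theta_k),
# with theta_k drawn from the prior; computed in log space for stability.
N = 200_000
theta_samples = rng.normal(mu0, tau0, size=N)
log_w = log_likelihood(theta_samples)
log_Z_mc = np.logaddexp.reduce(log_w) - np.log(N)

# Closed-form evidence for this conjugate model:
# y ~ N(mu0 * 1, sigma^2 I + tau0^2 * 1 1^T), evaluated via a Cholesky factor.
n = len(y)
cov = sigma**2 * np.eye(n) + tau0**2 * np.ones((n, n))
L = np.linalg.cholesky(cov)
resid = np.linalg.solve(L, y - mu0)
log_Z_exact = -0.5 * (resid @ resid) - np.log(np.diag(L)).sum() \
              - 0.5 * n * np.log(2 * np.pi)

print(f"naive MC log Z : {log_Z_mc:.4f}")
print(f"exact log Z    : {log_Z_exact:.4f}")
```

The prior-sampling estimator is unbiased but can have very high variance when the prior and posterior overlap poorly, which is one motivation for the more sophisticated techniques (importance sampling variants, quadrature rules, bridge/path-type estimators) surveyed in the article.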


Reviews

Primary rating

4.6
Insufficient number of ratings

Secondary ratings

Novelty
-
Significance
-
Scientific rigor
-

Recommendations

No data available