Article

Maximizing the information learned from finite data selects a simple model

Publisher

National Academy of Sciences
DOI: 10.1073/pnas.1715306115

Keywords

effective theory; model selection; renormalization group; Bayesian prior choice; information theory

Funding

  1. NIH [R01GM107103]
  2. NSF Directorate for Engineering, Division of Electrical, Communications and Cyber Systems, Energy, Power, and Control Networks program [1710727]
  3. Lewis-Sigler Fellowship
  4. NSF Directorate for Mathematical and Physical Sciences, Division of Physics [0957573]
  5. Narodowe Centrum Nauki Grant [2012/06/A/ST2/00396]

Abstract

We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.
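To make the selection criterion concrete (a sketch in assumed notation, not taken verbatim from the paper): writing the model's likelihood as p(x|θ) and a candidate prior as π(θ), the advocated prior maximizes the mutual information between parameters and predicted data,

```latex
\pi^{*} \;=\; \arg\max_{\pi}\, I(\Theta; X),
\qquad
I(\Theta; X) \;=\; \int \! d\theta \, \pi(\theta) \int \! dx \; p(x \mid \theta)\,
\log \frac{p(x \mid \theta)}{\int \! d\theta' \, \pi(\theta')\, p(x \mid \theta')} .
```

In information-theoretic terms, π* is the capacity-achieving input distribution of the channel θ → x. In the large-data limit this maximization is known to recover the Jeffreys prior π(θ) ∝ √(det g(θ)), with g(θ) the Fisher information metric; with finite data the maximizer is instead discrete, putting its weight on boundaries of the parameter space, as the abstract describes.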
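That boundary concentration can also be seen numerically. Below is a minimal illustrative sketch (not the authors' code; the coin-flip model, grid resolution, and iteration count are assumptions made here): since π* solves a channel-capacity problem, the standard Blahut–Arimoto iteration on a discretized one-parameter model already shows the optimal prior collapsing onto a few discrete atoms, including the endpoints of the parameter interval.

```python
import numpy as np
from scipy.stats import binom

# Minimal sketch (not the authors' code): find the mutual-information-
# maximizing prior for a coin whose bias theta is observed through N flips,
# using the Blahut-Arimoto algorithm on a discretized parameter grid.
# The grid size, N, and iteration count below are illustrative assumptions.

N = 10                                # flips per experiment (small-data regime)
thetas = np.linspace(0.0, 1.0, 201)   # discretized parameter grid on [0, 1]
xs = np.arange(N + 1)                 # possible outcomes: number of heads

# Likelihood matrix p(x | theta), shape (len(thetas), len(xs))
lik = binom.pmf(xs[None, :], N, thetas[:, None])

prior = np.full(len(thetas), 1.0 / len(thetas))  # start from a uniform prior

for _ in range(2000):
    marginal = prior @ lik  # p(x) under the current prior
    # Per-theta KL divergence D( p(x|theta) || p(x) ), with 0 log 0 := 0
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.where(lik > 0, np.log(lik / marginal[None, :]), 0.0)
    kl = np.sum(lik * log_ratio, axis=1)
    # Blahut-Arimoto update: reweight the prior by exp(KL), then normalize
    prior *= np.exp(kl)
    prior /= prior.sum()

# With N small, the optimized prior concentrates on a few discrete atoms,
# including the boundary points theta = 0 and theta = 1.
support = thetas[prior > 1e-3]
print("approximate support of the optimal prior:", np.round(support, 3))
```

With N = 10 flips the printed support is a handful of grid points rather than a smooth density; increasing N spreads the support toward a continuum, consistent with the Jeffreys-prior limit mentioned in the abstract.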
