Article; Proceedings Paper

Weak convergence and optimal tuning of the reversible jump algorithm

Journal

MATHEMATICS AND COMPUTERS IN SIMULATION
Volume 161, Issue -, Pages 32-51

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.matcom.2018.06.007

Keywords

Markov chain Monte Carlo methods; Metropolis-Hastings algorithms; Model selection; Optimal scaling; Random walk Metropolis algorithms

Funding

  1. NSERC (Natural Sciences and Engineering Research Council of Canada)
  2. FRQNT (Le Fonds de recherche du Quebec - Nature et technologies)
  3. SOA (Society of Actuaries)

Abstract

The reversible jump algorithm is a useful Markov chain Monte Carlo method, introduced by Green (1995), that allows switches between subspaces of differing dimensionality, and therefore model selection. Although this method is now increasingly used in key areas (e.g. biology and finance), it remains challenging to implement. In this paper, we focus on a simple sampling context in order to obtain theoretical results that lead to an optimal tuning procedure for the considered reversible jump algorithm, and consequently, to easy implementation. The key result is the weak convergence of the sequence of stochastic processes engendered by the algorithm. It represents the main contribution of this paper as it is, to our knowledge, the first weak convergence result for the reversible jump algorithm. Since the sampler updates the parameters according to a random walk, this result allows us to recover the well-known 0.234 rule for finding the optimal scaling. It also leads to an answer to the question: with what probability should a parameter update be proposed, rather than a model switch, at each iteration? Crown Copyright (C) 2018 Published by Elsevier B.V. on behalf of International Association for Mathematics and Computers in Simulation (IMACS). All rights reserved.
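The abstract describes a sampler that, at each iteration, proposes either a random-walk parameter update (with some probability) or a model switch, and whose random-walk scale is tuned toward the 0.234 acceptance rule. The following is a minimal, illustrative sketch of such a reversible jump sampler on a toy Gaussian target with two nested models; the target, the birth/death dimension-matching moves, and the names tau, sigma, and s are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint target (an assumed example, not the paper's setting):
# model k in {1, 2}, theta in R^k, pi(k, theta) proportional to 0.5 * N(theta; 0, I_k).
def log_target(k, theta):
    return np.log(0.5) - 0.5 * float(theta @ theta) - 0.5 * k * np.log(2.0 * np.pi)

def rj_sampler(n_iter, tau=0.5, sigma=2.38, s=0.5):
    """tau: probability of proposing a parameter update rather than a model switch;
    sigma: base random-walk scale; s: std. dev. of the dimension-matching proposal u."""
    k, theta = 1, np.zeros(1)
    n_rw = accepted_rw = 0
    for _ in range(n_iter):
        if rng.random() < tau:
            # Random walk Metropolis update of the parameters within the current model.
            n_rw += 1
            prop = theta + (sigma / np.sqrt(k)) * rng.standard_normal(k)
            if np.log(rng.random()) < log_target(k, prop) - log_target(k, theta):
                theta, accepted_rw = prop, accepted_rw + 1
        elif k == 1:
            # Birth move: propose the extra coordinate u ~ N(0, s^2); the Jacobian is 1,
            # and the birth/death move probabilities (both 1 - tau) cancel in the ratio.
            u = s * rng.standard_normal()
            log_q_u = -0.5 * (u / s) ** 2 - np.log(s) - 0.5 * np.log(2.0 * np.pi)
            prop = np.append(theta, u)
            if np.log(rng.random()) < log_target(2, prop) - log_target(1, theta) - log_q_u:
                k, theta = 2, prop
        else:
            # Death move: drop the second coordinate and credit its proposal density.
            u = theta[1]
            log_q_u = -0.5 * (u / s) ** 2 - np.log(s) - 0.5 * np.log(2.0 * np.pi)
            prop = theta[:1]
            if np.log(rng.random()) < log_target(1, prop) - log_target(2, theta) + log_q_u:
                k, theta = 1, prop
    return accepted_rw / max(n_rw, 1)

# Optimal-scaling guideline: a random-walk scale of about 2.38/sqrt(d) gives an
# acceptance rate near 0.234 in high dimension; on this 1- or 2-dimensional toy
# target the observed rate is naturally higher.
print("within-model acceptance rate:", rj_sampler(50_000))
```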
