Quasi Markov Chain Monte Carlo Methods
Abstract
Quasi-Monte Carlo (QMC) methods for estimating integrals are attractive since the resulting estimators typically converge at a faster rate than pseudorandom Monte Carlo. However, they can be difficult to set up on arbitrary posterior densities within the Bayesian framework, in particular for inverse problems. We introduce a general parallel Markov chain Monte Carlo (MCMC) framework, for which we prove a law of large numbers and a central limit theorem. In that context, nonreversible transitions are investigated. We then extend this approach to the use of adaptive kernels and state conditions, under which ergodicity holds. As a further extension, an importance sampling estimator is derived, for which asymptotic unbiasedness is proven. We consider the use of completely uniformly distributed (CUD) numbers within the above-mentioned algorithms, which leads to a general parallel quasi-MCMC (QMCMC) methodology. We prove consistency of the resulting estimators and demonstrate numerically that this approach scales close to $n^{-2}$ as we increase parallelisation, instead of the usual $n^{-1}$ that is typical of standard MCMC algorithms. In practical statistical models we observe multiple orders of magnitude improvement compared with pseudorandom methods.
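The CUD-driven construction can be illustrated on a toy problem. The sketch below (not the paper's algorithm; all names and step sizes are hypothetical) runs a random-walk Metropolis chain that consumes an explicit stream of uniforms, two per iteration: one for the symmetric proposal and one for the accept/reject decision. In the quasi-MCMC setting, the pseudorandom stream passed in would be replaced by a completely uniformly distributed (CUD) sequence; the driver interface is the same either way.

```python
import numpy as np

def metropolis(log_target, uniforms, x0=0.0, step=2.0):
    """Random-walk Metropolis driven by an explicit stream of uniforms.

    `uniforms` is a flat array of numbers in [0, 1), consumed two per
    iteration: one mapped to a symmetric uniform proposal, one used for
    the accept/reject step.  Substituting a CUD sequence for the
    pseudorandom stream is the idea behind quasi-MCMC.
    """
    xs = [x0]
    x = x0
    for i in range(0, len(uniforms) - 1, 2):
        u_prop, u_acc = uniforms[i], uniforms[i + 1]
        y = x + step * (2.0 * u_prop - 1.0)  # symmetric proposal on [x-step, x+step]
        # Metropolis acceptance: accept with probability min(1, pi(y)/pi(x))
        if np.log(u_acc) < log_target(y) - log_target(x):
            x = y
        xs.append(x)
    return np.array(xs)

# Standard normal target, known up to an additive constant in log space.
log_target = lambda x: -0.5 * x * x

rng = np.random.default_rng(0)
chain = metropolis(log_target, rng.random(2 * 50_000))
print(chain.mean(), chain.var())  # both should be near 0 and 1 respectively
```

Because the driver is just an array of uniforms, swapping `rng.random(...)` for the output of a full-period small generator (a common practical source of CUD-like sequences) changes nothing else in the code.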
Publication: arXiv e-prints
Pub Date: June 2018
arXiv: arXiv:1807.00070
Bibcode: 2018arXiv180700070S
Keywords: Mathematics - Statistics Theory; Mathematics - Probability