McCandless & Gustafson have just published an interesting paper, available Early View at Statistics in Medicine. They compare a conventional Bayesian analysis to so-called 'Monte Carlo sensitivity analysis' for the problem of assessing the sensitivity of an exposure effect to unmeasured confounding.
Automatic convergence checking in Bayesian inference with runjags
I’ve been performing some simulation studies comparing a Bayesian approach to a more traditional frequentist estimation approach in a particular problem. To do this I’ve been using the excellent JAGS package, calling it from R. One of the issues I’ve faced is the question of how long to run the MCMC sampler in the Bayesian approach. Use too few iterations, and the chains will not have converged to their stationary distribution, such that the samples will not be draws from the posterior distribution of the model parameters. In regular data analysis situations, one can make use of the extensive diagnostic toolkit which has been developed over the years. The most popular approaches, I believe, are to examine trace plots from multiple chains started with dispersed initial values, and to compute Gelman and Rubin’s Rhat statistic.
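To make the Rhat idea concrete, here is a minimal sketch of the (non-split) Gelman-Rubin statistic for a single parameter, computed from several chains. This is an illustration of the formula rather than the author's code (the post's simulations use JAGS from R); the function name and the simulated chains are my own.

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Gelman-Rubin Rhat for one parameter.

    chains: array of shape (m, n) -- m chains, each with n post-warmup draws.
    Values near 1 suggest the chains have mixed; values well above 1 suggest
    they have not converged to a common stationary distribution.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # W: average within-chain variance
    W = chains.var(axis=1, ddof=1).mean()
    # B/n: between-chain variance of the chain means (B follows Gelman-Rubin's scaling)
    B = n * chain_means.var(ddof=1)
    # Pooled (overestimating) estimate of the marginal posterior variance
    var_hat = (n - 1) / n * W + B / n
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(42)

# Two chains sampling the same distribution: Rhat should be close to 1
good = rng.normal(size=(2, 1000))
print(gelman_rubin_rhat(good))

# Chains stuck in different regions: Rhat well above 1
bad = good + np.array([[0.0], [5.0]])
print(gelman_rubin_rhat(bad))
```

In practice one would use an existing implementation (e.g. `gelman.diag` in R's coda package, which JAGS output plugs into directly) rather than hand-rolling this, but the calculation itself is only a few lines.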
My first foray with Stan
A number of people have recently mentioned Stan to me. Stan fits probability models to data using the Bayesian approach to statistical inference. WinBUGS was the first package to really allow users to fit complex, user-defined models with Bayesian methods. As far as I understand, Stan’s strongest selling points are that it is fast, because it compiles your model into C++ code, and that it implements clever sampling methods (for more on these, see the relevant section of Stan’s manual).