McCandless & Gustafson have just published an interesting paper, available Early View at Statistics in Medicine. They compare a conventional Bayesian analysis with so-called 'Monte-Carlo sensitivity analysis' for the problem of assessing the sensitivity of an exposure effect estimate to unmeasured confounding.
I've been performing some simulation studies comparing a Bayesian approach to a more traditional frequentist estimation approach for a particular problem. To do this I've been using the excellent JAGS package, calling it from R. One of the issues I've faced is how long to run the MCMC sampler in the Bayesian approach. Use too few iterations and the chains will not have converged to their stationary distribution, so the samples will not be draws from the posterior distribution of the model parameters. In regular data analysis situations, one can make use of the extensive diagnostic toolkit that has been developed over the years. The most popular diagnostics are, I believe, examining trace plots from multiple chains started with dispersed initial values, and Gelman and Rubin's Rhat measure.
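To give a flavour of what the Rhat diagnostic actually computes, here is a sketch in Python (the function name and the simulated chains are my own illustration, not JAGS output; in R one would typically call gelman.diag from the coda package instead). Rhat compares between-chain and within-chain variability: values near 1 are consistent with convergence, while clearly larger values suggest the chains have not yet mixed.

```python
import numpy as np

def gelman_rubin(chains):
    """Classic Gelman-Rubin Rhat for an (m, n) array:
    m chains, each with n draws of a scalar parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(1)
mixed = rng.normal(0.0, 1.0, size=(4, 1000))            # four well-mixed chains
stuck = mixed + np.array([[0.0], [0.0], [0.0], [3.0]])  # one chain off target

print(gelman_rubin(mixed))  # close to 1: no evidence against convergence
print(gelman_rubin(stuck))  # well above 1.1: the chains disagree
```

The dispersed initial values mentioned above matter here: if all chains start in the same place, they can agree with each other (Rhat near 1) while still being far from the stationary distribution.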
A number of people have recently mentioned Stan to me. Stan fits probability models to data using the Bayesian approach to statistical inference. WinBUGS was the first package to really allow users to fit complex, user-defined models with Bayesian methods. As far as I understand, Stan's strongest selling points are its speed, because it compiles your model into C++ code, and the clever sampling methods it implements (for more on this, see the relevant section of Stan's manual).
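To give a flavour of what a user-defined model looks like in this framework, here is a minimal Stan program for a normal mean and standard deviation (a toy example of my own, not from the paper or manual). Stan translates a program like this into C++, compiles it, and then samples from the posterior:

```stan
// Toy model: normal data with user-chosen priors
data {
  int<lower=0> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  mu ~ normal(0, 10);     // prior on the mean
  sigma ~ cauchy(0, 5);   // half-Cauchy prior, via the lower bound
  y ~ normal(mu, sigma);  // likelihood
}
```

The same model could be written in BUGS/JAGS syntax with only superficial changes; the difference lies in how the posterior is then sampled.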
Yesterday I had an interesting discussion with a friend about how parameters are thought of in Bayesian inference. Coming from a predominantly frequentist statistical education, I had somewhere along the line picked up the notion that for Bayesians, as for frequentists, the true values of the model parameters are unknown but fixed quantities. The prior distribution then represents our belief, before the data are seen, about where this fixed value lies: that is, our uncertainty about the location of the unknown, but fixed, parameter value.
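A toy conjugate example (the numbers and code are my own illustration) makes this view concrete. Suppose a coin's true probability of heads, theta, is some fixed value. A Beta prior describes where we believe theta lies; observing data updates and typically narrows that belief, even though theta itself never changes:

```python
# Beta(a, b) prior for a fixed but unknown probability theta
a, b = 2, 2            # prior belief centred on 0.5, fairly vague
heads, tails = 7, 3    # hypothetical observed coin flips

# Conjugate update: the posterior is Beta(a + heads, b + tails)
a_post, b_post = a + heads, b + tails

prior_mean = a / (a + b)
prior_var = (a * b) / ((a + b) ** 2 * (a + b + 1))
post_mean = a_post / (a_post + b_post)
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))

print(prior_mean, post_mean)  # belief shifts towards the observed frequency
print(prior_var, post_var)    # posterior variance is smaller than prior variance
```

On this reading, both distributions describe our state of knowledge about the one fixed value of theta, not any variability in theta itself.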