McCandless & Gustafson have just published an interesting paper that is available Early View at Statistics in Medicine. They compare a conventional Bayesian analysis to so-called ‘Monte-Carlo sensitivity analysis’ for the problem of assessing the sensitivity of an exposure effect to unmeasured confounding.

They consider a setup with a binary outcome, some measured confounders, and a binary unmeasured confounder. A particular logistic regression model is assumed for the outcome conditional on exposure, measured confounders, and the unmeasured confounder, and a second logistic regression model for the unmeasured confounder conditional on exposure and measured confounders. Prior distributions are then specified for the model parameters, which McCandless and Gustafson divide into ‘bias parameters’ and ‘remaining parameters’.
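To make the setup concrete, the following simulates data from such a pair of logistic regressions; all coefficient values and distributions below are my own illustrative choices, not numbers from the paper:

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n = 1000
c = rng.normal(size=n)                    # a measured confounder
x = rng.binomial(1, expit(0.2 * c))       # binary exposure
# unmeasured binary confounder U, modelled given exposure and C
u = rng.binomial(1, expit(-1.0 + 0.5 * x + 0.3 * c))
# binary outcome, modelled given exposure, C, and U;
# here 0.5 is the exposure log odds ratio of interest
# and 0.7 the U-outcome log odds ratio
y = rng.binomial(1, expit(-0.5 + 0.5 * x + 0.4 * c + 0.7 * u))
```

In an analysis only `y`, `x`, and `c` would be observed; the parameters tying `u` to exposure and outcome are the kind of quantities treated as bias parameters.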

The conventional Bayesian analysis (BSA), which the authors implemented using Stan, then gives draws from the posterior distribution of all model parameters. In particular, they demonstrate that the posterior distributions of the ‘bias parameters’ differ from their priors, indicating that the observed data contain information about these parameters.
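As a sketch of what such a Bayesian analysis involves (this is not the authors' Stan implementation), the following fits the two-logistic-regression model by random-walk Metropolis on simulated data, marginalising the latent confounder out of the observed-data likelihood. All data-generating values, priors, and tuning constants are illustrative assumptions:

```python
import numpy as np

def expit(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# --- Simulate data from the assumed model (illustrative values) ---
n = 500
c = rng.normal(size=n)                                  # measured confounder
x = rng.binomial(1, expit(0.2 * c))                     # exposure
u = rng.binomial(1, expit(-1.0 + 0.5 * x + 0.3 * c))    # unmeasured confounder
y = rng.binomial(1, expit(-0.5 + 0.5 * x + 0.4 * c + 0.7 * u))

# --- Observed-data log posterior, marginalising over the latent U ---
def log_post(theta):
    b0, bx, bc, bu, a0, ax, ac = theta
    pu = expit(a0 + ax * x + ac * c)                    # P(U = 1 | X, C)
    # P(Y | X, C) = sum over u of P(Y | X, C, u) P(u | X, C)
    py = np.zeros(n)
    for uval, w in ((0, 1 - pu), (1, pu)):
        p1 = expit(b0 + bx * x + bc * c + bu * uval)
        py += w * np.where(y == 1, p1, 1 - p1)
    # independent N(0, 2^2) priors on all parameters (an arbitrary choice)
    return np.sum(np.log(py)) - np.sum(theta ** 2) / (2 * 2.0 ** 2)

# --- Random-walk Metropolis over all seven parameters jointly ---
n_iter, step = 4000, 0.08
theta = np.zeros(7)
cur = log_post(theta)
samples = np.empty((n_iter, 7))
accepted = 0
for i in range(n_iter):
    prop = theta + step * rng.normal(size=7)
    lp = log_post(prop)
    if np.log(rng.uniform()) < lp - cur:
        theta, cur = prop, lp
        accepted += 1
    samples[i] = theta
```

After discarding burn-in, the columns of `samples` corresponding to the ‘bias parameters’ (`bu`, `a0`, `ax`, `ac`) can be compared with their N(0, 2²) priors, which is the kind of prior-versus-posterior comparison the authors report.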

The Monte-Carlo sensitivity analysis (MCSA) is then performed. This involves repeating the following process many times:

1. draw new values of the bias parameters from their prior distributions
2. calculate an estimate of the exposure effect adjusted for the unmeasured confounder, using the bias parameter values drawn in step 1 and the naive estimated exposure effect which ignores the unmeasured confounding
3. sample a value of the adjusted effect from a normal distribution centred at the adjusted estimate from step 2, with variance equal to the estimated sampling variance from the naive analysis
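The steps above can be sketched in code. The bias-adjustment formula below (a simple ratio-of-bias-factors correction on the log odds ratio scale), the prior choices, and the naive estimate are all illustrative assumptions of mine, not the exact quantities used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2020)

# naive (confounder-ignoring) analysis: log odds ratio and its standard error
# (illustrative numbers, not from the paper)
beta_naive, se_naive = 0.5, 0.15

n_draws = 10_000
draws = np.empty(n_draws)
for i in range(n_draws):
    # 1. draw bias parameters from their priors (illustrative choices):
    #    gamma: log OR of the unmeasured confounder U on the outcome
    #    p1, p0: prevalence of U among the exposed / unexposed
    gamma = rng.normal(0.7, 0.2)
    p1 = rng.beta(4, 6)
    p0 = rng.beta(2, 8)
    # 2. adjust the naive estimate using a simple bias formula:
    #    subtract the log of the ratio of bias factors
    bias = np.log((p1 * (np.exp(gamma) - 1) + 1) /
                  (p0 * (np.exp(gamma) - 1) + 1))
    beta_adj = beta_naive - bias
    # 3. add the sampling uncertainty from the naive analysis
    draws[i] = rng.normal(beta_adj, se_naive)

# use the draws like a posterior sample
point = np.median(draws)
ci = np.percentile(draws, [2.5, 97.5])
```

The final two lines correspond to the way the resulting estimates are summarised into a point estimate and a 95% interval.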

The resulting estimates are then used like a sample from the posterior, in order to form a point estimate and a 95% credible interval. The authors refer to a 2005 paper by Sander Greenland, stating that when the prior and posterior distributions of the bias parameters are approximately equal, MCSA and BSA should give similar results. However, McCandless & Gustafson demonstrate that in the unmeasured confounding setting they consider this is not the case: the credible intervals resulting from MCSA and BSA can be quite different, even when the priors used in both are the same. As such, they recommend against using MCSA, given BSA’s firm justification.

The preceding reminded me of Hogan, Daniels and Hu’s chapter on Bayesian sensitivity analysis in the Handbook of Missing Data Methodology. In it they give a formal definition of a sensitivity parameter, part of which is the requirement that the observed data contain no information about the sensitivity parameter. The motivation for this is that one should set up the model so that there is a clear separation between parameters which are identifiable from the observed data and parameters which are not. The models considered by McCandless and Gustafson do not appear to adhere to this recommendation, since, as they demonstrate, the observed data do contain information about the bias parameters, and it is for this reason that the prior and posterior for the bias parameters are not the same (one of the requirements described by Greenland for MCSA to be approximately equivalent to BSA).

Given the increasing accessibility of software for performing Bayesian analyses, and the issues identified by McCandless and Gustafson, one might argue that it is always safest to just perform the Bayesian analysis, rather than go down the MCSA route.

Feb 2020 postscript – I have just been made aware that Sander Greenland subsequently wrote a letter to the editor about this paper, that readers might be interested to look at here.