I've recently been working on some simulation studies in R which involve computationally intensive MCMC sampling. Ordinarily I would use my institution's computing cluster for these, making use of its large number of cores, but its temporary unavailability led me to investigate using Amazon Web Services (AWS) instead. In this post I'll describe the steps I went through to get my simulations going in R. As background, I am mainly a Windows user, and had never really used the Linux operating system. Nonetheless, the process wasn't too tricky in the end, and it's enabled me to complete the simulations far more quickly than if I'd just used my desktop's 8 cores. The advantage of a cloud computing resource (from my perspective) is that in principle you can use as little or as much computing power as you need, whenever you need it - you don't have to compete against other users' demands, as is typically the case on an academic institution's computing cluster.
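For context, simulation studies like these are embarrassingly parallel, so on any multi-core machine (a desktop or an AWS instance alike) the replicates can be spread across cores with R's parallel package. Here is a minimal sketch, with a toy replicate function standing in for the actual MCMC analysis (which is not shown in this post):

```r
library(parallel)

# Toy stand-in for one simulation replicate; in practice this would run
# the MCMC sampling and return the quantities of interest
one_rep <- function(seed) {
  set.seed(seed)
  mean(rnorm(100))
}

# mclapply forks the R session across cores (Linux/macOS; on Windows,
# parLapply with a cluster object is the analogue)
n_cores <- detectCores()
results <- mclapply(1:1000, one_rep, mc.cores = n_cores)
```

Each replicate gets its own seed, so the run is reproducible regardless of how the work is divided among cores.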
For any users of my R package smcfcs, I've just released a new version (1.1.1), which, along with a few small changes, includes a critical bug fix. The bug affected imputation of categorical variables (both binary variables and those with more than two levels) when the substantive model is linear regression (other substantive model types were not affected). All users should update to the new version, which is available on CRAN.
In the process of organising a conference session on machine learning, I've finally got around to reading the late Leo Breiman's thought-provoking 2001 Statistical Science article "Statistical Modeling: The Two Cultures". I highly recommend reading the paper, and the discussion that follows it. In the paper Breiman argues that statistics as a field should open its eyes to analysing data not only with traditional 'data models' (his terminology), by which he means standard (usually parametric) probabilistic models, but also to make much more use of so-called algorithmic techniques from machine learning.
A concern when analysing data with missing values is that the missing at random (MAR) assumption, upon which a number of methods rely, does not hold. When the MAR assumption is in doubt, ideally we should perform sensitivity analyses, whereby we assess how sensitive our conclusions are to plausible deviations from MAR. One route to such a sensitivity analysis, convenient if one has already performed multiple imputation (assuming MAR), is the weighting method proposed by Carpenter et al in 2007. This involves applying a weighted version of Rubin's rules to the parameter estimates obtained from the MAR imputations, with the weight given to a particular imputation's estimate depending on how plausible the imputations in that dataset are under an assumed missing not at random (MNAR) mechanism. The method is appealing because, computationally, it requires relatively little additional effort once the MAR imputations have been generated.
In an important paper just published by Rezvan et al in BMC Medical Research Methodology, the performance of this weighting method has been explored through a series of simulation studies. In summary, they find that the method does not recover unbiased estimates, even when the number of imputations used is large and the correct (true) value of the MNAR sensitivity parameter is used. The paper explains in detail possible reasons for the failure of the method, but the summary conclusion is that the weighting method ought not to be used for performing MNAR sensitivity analyses after MAR multiple imputation.
What might one do as an alternative? One option is to perform a selection model MNAR sensitivity analysis using software such as WinBUGS or JAGS, in which the substantive model and the selection (missingness) model are fitted jointly, with an informative prior on the sensitivity parameter. A further alternative, which like the weighting approach can (in certain situations) exploit multiple imputations generated assuming MAR, is the pattern mixture approach, whereby the MAR imputations are modified to reflect an assumed MNAR mechanism. The modified imputations can then be analysed and the results combined using Rubin's rules in the usual way.
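To give a rough flavour of the pattern mixture approach, here is an illustrative sketch with simulated data (my own construction, not code from any of the papers cited). Y has missing values, MAR imputations are drawn from a linear regression of Y on X (simplified here in that parameter uncertainty in the imputation model is ignored), a sensitivity parameter delta is added to each imputed value, and the modified imputations are pooled with Rubin's rules:

```r
set.seed(1)
n <- 200; M <- 10
delta <- -0.5  # assumed MNAR sensitivity parameter

x <- rnorm(n)
y <- x + rnorm(n)
miss <- rbinom(n, 1, 0.3) == 1  # y is missing for these records

ests <- vars <- numeric(M)
for (m in 1:M) {
  # MAR imputation of y from its regression on x among the complete cases
  fit_cc <- lm(y ~ x, subset = !miss)
  y_imp <- y
  y_imp[miss] <- predict(fit_cc, data.frame(x = x[miss])) +
    rnorm(sum(miss), 0, sigma(fit_cc)) + delta  # pattern mixture shift
  fit_m <- lm(y_imp ~ x)
  ests[m] <- coef(fit_m)["x"]
  vars[m] <- vcov(fit_m)["x", "x"]
}

# Rubin's rules: pooled point estimate and total variance
qbar <- mean(ests)
tvar <- mean(vars) + (1 + 1/M) * var(ests)
```

In an actual sensitivity analysis one would repeat this over a range of delta values and examine how the pooled estimate and conclusions change.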
Today I gave a seminar at the Centre for Biostatistics, University of Manchester, as part of a three seminar afternoon on missing data. My talk described recent work on methods for handling missing covariates in competing risks analysis, with a focus on when complete case analysis is valid and on multiple imputation approaches. For the latter, our substantive model compatible adaptation of fully conditional specification now supports competing risks analysis, both in R and Stata (see here).
The slides of my talk are available here.
Update 13th May 2016: the corresponding paper is now available (open access) here.
In 2007, Paul von Hippel published a nice paper proposing a variant of the conventional multiple imputation (MI) approach to handling missing data: multiple imputation followed by deletion (MID). The context considered is fitting a regression model for an outcome Y on covariates X, where some Y and X values are missing. The approach consists of running the imputation as usual, imputing missing values in both Y and X, but then discarding those records in which the outcome Y was imputed. The reduced datasets, with missing X values imputed but only observed Y values retained, are then analysed as usual, with results combined using Rubin's rules.
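To make the MID recipe concrete, here is a minimal sketch with simulated data. The imputation step is a deliberately simplified single-pass stochastic regression imputation (a stand-in for a full MI procedure, and with disjoint missingness in X and Y for simplicity); the MID-specific part is the deletion of imputed-Y records before analysis:

```r
set.seed(2)
n <- 500; M <- 10
x <- rnorm(n)
y <- x + rnorm(n)
rx <- rbinom(n, 1, 0.2) == 1         # x missing for these records
ry <- rbinom(n, 1, 0.2) == 1 & !rx   # y missing (disjoint, for simplicity)
xobs <- ifelse(rx, NA, x)
yobs <- ifelse(ry, NA, y)
cc <- !rx & !ry                      # complete cases

ests <- vars <- numeric(M)
for (m in 1:M) {
  # simple stochastic regression imputation from the complete cases
  fx <- lm(xobs ~ yobs, subset = cc)
  fy <- lm(yobs ~ xobs, subset = cc)
  xi <- xobs
  xi[rx] <- predict(fx, data.frame(yobs = yobs[rx])) + rnorm(sum(rx), 0, sigma(fx))
  yi <- yobs
  yi[ry] <- predict(fy, data.frame(xobs = xobs[ry])) + rnorm(sum(ry), 0, sigma(fy))
  # MID: delete the records whose outcome y was imputed, then analyse
  fit <- lm(yi ~ xi, subset = !ry)
  ests[m] <- coef(fit)["xi"]
  vars[m] <- vcov(fit)["xi", "xi"]
}
qbar <- mean(ests)                           # Rubin's rules pooled estimate
tvar <- mean(vars) + (1 + 1/M) * var(ests)   # total variance
```

Note that the analysis model in the final step only ever sees observed Y values; the Y imputations exist solely to help impute X.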