smcfcs in R - updated version 1.1.1 with critical bug fix

For any users of my R package smcfcs, I've just released a new version (1.1.1), which, along with a few small changes, includes a critical bug fix. The bug affected imputation of categorical variables (binary variables and those with more than two levels) when the substantive model is linear regression (other substantive model types were not affected). All users should update to the new version, which is available on CRAN.
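To illustrate the affected setting, here is a minimal, hedged sketch of imputing a partially observed binary covariate compatibly with a linear regression substantive model using smcfcs. The data frame mydata and the variable names y, x and z are hypothetical; consult the package documentation for the definitive interface.

```r
# install.packages("smcfcs")   # make sure you have version >= 1.1.1
library(smcfcs)

# Hypothetical data: y (fully observed outcome), x (binary, partially
# observed), z (fully observed continuous covariate), in that column order
imps <- smcfcs(originaldata = mydata,
               smtype = "lm",                 # linear regression substantive model
               smformula = "y ~ x + z",
               method = c("", "logreg", ""))  # impute binary x via logistic regression
```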

Machine learning vs. traditional modelling techniques

In the process of organising a conference session on machine learning, I've finally got around to reading the late Leo Breiman's thought-provoking 2001 Statistical Science article "Statistical Modeling: The Two Cultures". I highly recommend reading the paper, and the discussion that follows it. In the paper Breiman argues that statistics as a field should open its eyes to analysing data not only with traditional 'data models' (his terminology), by which he means standard (usually parametric) probabilistic models, but also to make much more use of so-called algorithmic machine learning techniques.
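To make the contrast concrete, here is a small illustrative sketch (mine, not from Breiman's paper) fitting the same simulated data under each culture:

```r
library(randomForest)
set.seed(1)

# Simulated data with an interaction that the parametric model below omits
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y <- x1 + x1 * x2 + rnorm(n)
dat <- data.frame(y, x1, x2)

# 'Data model' culture: an explicit (here misspecified) probabilistic model
fit_lm <- lm(y ~ x1 + x2, data = dat)

# 'Algorithmic' culture: predict y without positing a data-generating model
fit_rf <- randomForest(y ~ x1 + x2, data = dat)
```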


Weighting after multiple imputation for MNAR sensitivity analysis not recommended

A concern when analysing data with missing values is that the missing at random (MAR) assumption, upon which a number of methods rely, does not hold. When the MAR assumption is in doubt, ideally we should perform sensitivity analyses, whereby we assess how sensitive our conclusions are to plausible deviations from MAR. One route to performing such a sensitivity analysis, which is convenient if one has already performed multiple imputation (assuming MAR), is the weighting method proposed by Carpenter et al in 2007. This involves applying a weighted version of Rubin's rules to the parameter estimates obtained from the MAR imputations, with the weight given to a particular imputation's estimate depending on how plausible that dataset's imputations are under an assumed missing not at random (MNAR) mechanism. The method is appealing because, computationally, it requires relatively little additional effort once MAR imputations have been generated.
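As a rough sketch of the combination step, the weighted point estimate might be computed as below. The weight construction shown (weights proportional to the exponential of the sensitivity parameter times the sum of each dataset's imputed values) is an illustrative reading of the proposal, not a definitive implementation; see Carpenter et al's paper for the exact formulae.

```r
# theta_hat: vector of M point estimates, one per MAR imputed dataset
# imputed_vals: list of length M, each element the imputed values in that dataset
# delta: assumed MNAR sensitivity parameter
combine_weighted <- function(theta_hat, imputed_vals, delta) {
  logw <- delta * sapply(imputed_vals, sum)
  w <- exp(logw - max(logw))  # subtract max for numerical stability
  w <- w / sum(w)
  sum(w * theta_hat)          # weighted version of Rubin's rules point estimate
}
```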

In an important paper just published by Rezvan et al in BMC Medical Research Methodology, the performance of this weighting method has been explored through a series of simulation studies. In summary, they find that the method does not recover unbiased estimates, even when the number of imputations used is large and the correct (true) value of the MNAR sensitivity parameter is used. The paper explains in detail possible reasons for the failure of the method, but the summary conclusion is that the weighting method ought not to be used for performing MNAR sensitivity analyses after MAR multiple imputation.

What might one do as an alternative? One option is to perform a selection model MNAR sensitivity analysis using software such as WinBUGS or JAGS, in which the substantive model and selection (missingness) model are jointly fitted, and an informative prior is used for the sensitivity parameter. A further alternative, which like the weighting approach can (in certain situations) exploit multiple imputations generated assuming MAR, is the pattern mixture approach, whereby the MAR imputations are modified to reflect an assumed MNAR mechanism. The modified imputations can then be analysed and results combined using Rubin's rules in the usual way.
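For the pattern mixture route, a minimal sketch using the mice package follows, assuming the simplest form of modification in which a constant delta is added to the imputed values of the partially observed outcome y. The data frame dat and the variable names are hypothetical.

```r
library(mice)

delta <- -0.5                                 # assumed MNAR shift (sensitivity parameter)
imp <- mice(dat, m = 20, printFlag = FALSE)   # imputations generated under MAR

# Stack the original data and all imputed datasets, then shift the values
# of y that were originally missing
long <- complete(imp, action = "long", include = TRUE)
missY <- rep(is.na(dat$y), imp$m + 1)         # flag originally missing y in stacked data
sel <- long$.imp > 0 & missY
long$y[sel] <- long$y[sel] + delta

impMNAR <- as.mids(long)                      # modified imputations reflect assumed MNAR
summary(pool(with(impMNAR, lm(y ~ x1 + x2)))) # Rubin's rules as usual
```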

Missing covariates in competing risks analysis

Today I gave a seminar at the Centre for Biostatistics, University of Manchester, as part of a three-seminar afternoon on missing data. My talk described recent work on methods for handling missing covariates in competing risks analysis, with a focus on when complete case analysis is valid and on multiple imputation approaches. For the latter, our substantive model compatible adaptation of fully conditional specification now supports competing risks analysis, both in R and Stata (see here).
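The competing risks support works via a list of cause-specific hazard model formulas. A hedged sketch of the R syntax, with a hypothetical data frame crdata containing time t, event indicator d (0 for censored, 1 and 2 for the two causes), a partially observed continuous covariate x and a fully observed covariate z:

```r
library(smcfcs)

imps <- smcfcs(originaldata = crdata,
               smtype = "compet",
               smformula = list("Surv(t, d == 1) ~ x + z",
                                "Surv(t, d == 2) ~ x + z"),
               method = c("", "", "norm", ""))  # impute continuous x; rest observed
```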

The slides of my talk are available here.

Update 13th May 2016: the corresponding paper is now available (open access) here.

Multiple imputation followed by deletion of imputed outcomes

In 2007, Paul von Hippel published a nice paper proposing a variant of the conventional multiple imputation (MI) approach to handling missing data: multiple imputation followed by deletion (MID). The context considered was where we are interested in fitting a regression model for an outcome Y with covariates X, and some Y and X values are missing. The approach consists of running the imputation process as usual, imputing missing values in both Y and X, but then discarding those records where the outcome Y had been imputed. The reduced datasets, with missing X values imputed but only observed Y values retained, are then analysed as usual, with results combined using Rubin's rules.
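A minimal sketch of MID using mice, with a hypothetical data frame dat containing outcome y and covariates x1 and x2, all partially observed:

```r
library(mice)

imp <- mice(dat, m = 20, printFlag = FALSE)  # impute missing y and x values as usual

# Fit the substantive model to each imputed dataset, first deleting the
# records whose outcome y was imputed
fits <- lapply(1:imp$m, function(m) {
  comp <- complete(imp, m)
  lm(y ~ x1 + x2, data = comp[!is.na(dat$y), ])
})

summary(pool(as.mira(fits)))                 # combine with Rubin's rules
```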


Using hazard ratios to estimate causal effects in RCTs

Odd Aalen and colleagues have recently published an interesting paper on the use of Cox models for estimating treatment effects in randomised controlled trials. In a randomised trial we have the treatment assignment variable X, and an often used primary analysis is to fit a simple Cox model with X as the only covariate. This gives an estimated hazard ratio comparing the hazard in the treatment group to that in the control group, and this ratio is assumed constant over time. In any trial there will almost certainly exist other variables Z which influence the outcome, some of which might be measured and some of which will always be unmeasured. At baseline, X and Z are statistically independent as a result of randomisation, which of course is the reason randomisation in general allows us to make a causal statement about the treatment effect - we need not worry about confounding.
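A small illustrative simulation (mine, not from the paper) of this setting: treatment X is randomised, an unmeasured Z influences the hazard, and the primary analysis is a simple Cox model with X as the only covariate.

```r
library(survival)
set.seed(42)

n <- 10000
x <- rbinom(n, 1, 0.5)                  # randomised treatment: independent of Z at baseline
z <- rnorm(n)                           # unmeasured variable influencing the outcome
t <- rexp(n, rate = exp(-0.5 * x + z))  # conditional log hazard ratio for x is -0.5

# The often used primary analysis: a Cox model with treatment as sole covariate
fit <- coxph(Surv(t) ~ x)
summary(fit)
```

Because of the unmeasured heterogeneity induced by Z, the hazard ratio estimated by this simple model will generally differ from exp(-0.5), and the true hazard ratio comparing the randomised groups will not in fact be constant over time.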
