Estimating effects when outcomes are truncated by death

A common situation arises when one wants to estimate the effect of a treatment or exposure at some time point t in an observational cohort or randomised trial. For example, what is the mean difference in some outcome Y at time t between the two groups of interest? To make things a bit simpler, let's suppose that subjects were allocated to the two groups (e.g. two treatments A and B) randomly, as in a randomised trial. Now suppose that some of the subjects die before time t, such that their outcome Y is not observed. Then we can no longer compare Y between the two groups in all subjects, because some values of Y are missing, or truncated by death.
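
To make the problem concrete, here is a small simulation sketch (purely illustrative, not from any real trial) in which Y at time t is only observed among those who survive to t, so that a naive comparison of survivors in the two arms is no longer a comparison of like with like:

```r
# Purely illustrative simulation: outcome Y at time t is only observed in
# subjects who survive to t
set.seed(1)
n <- 10000
trt <- rbinom(n, 1, 0.5)                  # randomised treatment group
u <- rnorm(n)                             # unmeasured health status
survived <- rbinom(n, 1, plogis(0.5 + 0.5 * trt + u))
y <- 1 + 0.5 * trt + u + rnorm(n)         # outcome at time t
y[survived == 0] <- NA                    # truncated by death

# naive comparison among survivors: the surviving subsets of the two arms
# differ in their distribution of u, so the comparison is distorted
mean(y[trt == 1], na.rm = TRUE) - mean(y[trt == 0], na.rm = TRUE)
```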

Read more

Matching analysis to design: stratified randomization in trials

Yesterday I was re-reading the nice recent articles by Brennan Kahan and Tim Morris on how to analyse trials which use stratified randomization. Stratified randomization is commonly used in trials, and involves randomizing in a way that ensures the treatments are assigned in balance within strata defined by chosen baseline covariates.
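
As a hypothetical illustration (not taken from the Kahan and Morris papers, and with function and variable names made up), permuted block randomization within strata might look something like this in R:

```r
# Hypothetical example of permuted block randomization within strata
stratified_blocks <- function(n_per_stratum, strata, block_size = 4) {
  assign_stratum <- function(n) {
    n_blocks <- ceiling(n / block_size)
    blocks <- replicate(n_blocks, sample(rep(c("A", "B"), block_size / 2)))
    as.vector(blocks)[1:n]
  }
  data.frame(stratum = rep(strata, times = n_per_stratum),
             treatment = unlist(lapply(n_per_stratum, assign_stratum)))
}

set.seed(2)
alloc <- stratified_blocks(n_per_stratum = c(20, 20, 20),
                           strata = c("centre 1", "centre 2", "centre 3"))
table(alloc$stratum, alloc$treatment)   # balanced within each stratum
```

Their articles are about how the analysis should reflect this design (for example by adjusting for the stratification variables), rather than about the mechanics of the randomization itself.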

Read more

Combining bootstrapping with multiple imputation

Multiple imputation (MI) is a popular approach to handling missing data. In the final part of MI, inferences for parameter estimates are made using simple rules developed by Rubin. These rules rely on the analyst having a calculable standard error for their parameter estimate from each imputed dataset. This is fine for standard analyses, e.g. regression models fitted by maximum likelihood, where standard errors based on asymptotic theory are easily calculated. However, for many analyses analytic standard errors are not available, or are prohibitively difficult to derive. For such analyses, if there were no missing data, an attractive approach for finding standard errors and confidence intervals is bootstrapping. However, if one is using MI to handle missing data, and would ordinarily use bootstrapping to find standard errors / confidence intervals, how should the two be combined?
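
For reference, Rubin's rules themselves are simple to code up. A minimal sketch, assuming we already have a point estimate and an analytic within-imputation variance from each of the M imputed datasets:

```r
# Rubin's rules: pool estimates and within-imputation variances from M
# imputed datasets
pool_rubin <- function(est, within_var) {
  M <- length(est)
  qbar <- mean(est)                       # pooled point estimate
  ubar <- mean(within_var)                # average within-imputation variance
  b <- var(est)                           # between-imputation variance
  total <- ubar + (1 + 1 / M) * b         # total variance
  c(estimate = qbar, se = sqrt(total))
}

# e.g. with (made up) estimates and variances from M = 5 imputations
pool_rubin(est = c(1.02, 0.98, 1.05, 1.01, 0.97),
           within_var = c(0.040, 0.045, 0.041, 0.044, 0.042))
```

The difficulty the post considers is what to do when the within-imputation variances in this calculation are not available analytically and would ordinarily be obtained by bootstrapping.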

Read more

Multiple imputation for missing covariates in Poisson regression

This week I've released a new version of the smcfcs package for R on CRAN. SMC-FCS performs multiple imputation for missing covariates in regression models, using an adaptation of the chained equations / fully conditional specification approach to imputation, which we called Substantive Model Compatible Fully Conditional Specification MI.

The new version of smcfcs now supports Poisson regression outcome / substantive models, which are often used for count outcomes. Future additions will add support for negative binomial regression models, often used to model overdispersed count outcomes, and for offsets, which are often needed when fitting count regression models.
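
As a rough sketch of what a call might look like (the data frame and variable names here are hypothetical, and the package documentation should be checked for the exact interface), imputing a partially observed covariate x compatibly with a Poisson model for a count outcome y:

```r
# Hypothetical example: df has columns y (count outcome), x (partially
# observed covariate) and z (fully observed covariate), in that order
library(smcfcs)
library(mitools)

imps <- smcfcs(originaldata = df,
               smtype = "poisson",
               smformula = "y ~ x + z",
               method = c("", "norm", ""))   # impute x with a normal model

# fit the substantive model in each imputed dataset and pool by Rubin's rules
impobj <- imputationList(imps$impDatasets)
fits <- with(impobj, glm(y ~ x + z, family = poisson))
summary(MIcombine(fits))
```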

On the missing at random assumption in longitudinal trials

The missing at random (MAR) assumption plays an extremely important role in the context of analysing datasets subject to missing data. Its importance lies primarily in the fact that if we are willing to assume data are MAR, we can identify (estimate) the target parameters. There are a variety of methods for handling data which are assumed to be MAR. One approach is estimation of a model for the variables of interest by maximum likelihood. In the context of randomised trials, primary analyses are sometimes based on methods which are valid under MAR, such as linear mixed models (MMRM). A key concern however is whether the MAR assumption is plausibly valid in any given situation.
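
As an illustration of the kind of MAR-valid analysis referred to, an MMRM-type model can be fitted with gls from the nlme package using an unstructured covariance for the repeated measures. The data frame and variable names below are hypothetical:

```r
# Hypothetical MMRM-type analysis: longdat is in long format with one row
# per patient-visit, id the patient identifier, visit a factor and visitnum
# an integer visit index; rows for missed visits are simply absent
library(nlme)

fit <- gls(y ~ baseline + treat * visit,
           data = longdat,
           correlation = corSymm(form = ~ visitnum | id),   # unstructured correlation
           weights = varIdent(form = ~ 1 | visit))          # visit-specific variances
summary(fit)
```

Because the model is fitted by (restricted) maximum likelihood to the observed data, the resulting estimates are valid under MAR.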

Read more

Running simulations in R using Amazon Web Services

I've recently been working on some simulation studies in R which involve computationally intensive MCMC sampling. Ordinarily I would use my institution's computing cluster to do these, making use of its large number of cores, but a temporary lack of availability led me to investigate using Amazon Web Services (AWS) instead. In this post I'll describe the steps I went through to get my simulations going in R. As background, I am mainly a Windows user, and had never really used the Linux operating system. Nonetheless, the process wasn't too tricky in the end, and it's enabled me to get the simulations completed far more quickly than if I'd just used my desktop's 8 cores. The advantage of a cloud computing resource (from my perspective) is that in principle you can use as little or as much computing power as you need or want, and it is always available - you don't have to compete against other users' demands, as would typically be the case on an academic institution's computing cluster.
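
The AWS-specific setup is described in the full post; as a generic sketch of the kind of parallelisation involved (run_one_rep is a hypothetical placeholder for a single replication), replications can be spread across cores with the parallel package:

```r
# Generic sketch of parallelising simulation replications across cores
library(parallel)

run_one_rep <- function(rep) {
  # placeholder for a single (e.g. MCMC-based) simulation replication
  mean(rnorm(1000))
}

# mclapply forks the R process, so this works on Linux (e.g. an AWS instance)
# but not on Windows, where parLapply with a cluster can be used instead
results <- mclapply(1:1000, run_one_rep, mc.cores = detectCores())
```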

Read more