Multiple imputation using random forest

In recent years a number of researchers have proposed using machine learning techniques to impute missing data. One of these is the so-called random forest technique. I recently gave a talk on the topic at the International Biometric Society's conference in Florence, Italy. In case it is of interest to anyone, the slides of the talk are available below.

Slides from talk at IBC2014 on random forest multiple imputation
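As a loose illustration of the idea (not the method presented in the talk), random forests can be used as the prediction engine inside a chained-equations imputation loop. The sketch below uses scikit-learn's experimental `IterativeImputer` with a `RandomForestRegressor`; the data are simulated for illustration. Note that this produces a single completed dataset, whereas proper multiple imputation requires repeated draws that reflect the uncertainty in the imputed values.

```python
# Sketch: random-forest-based imputation via chained equations (illustrative only).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)

# Simulate three correlated variables, then make ~20% of the third missing.
X = rng.normal(size=(200, 3))
X[:, 2] += X[:, 0]
X[rng.random(200) < 0.2, 2] = np.nan

# Iteratively impute each incomplete variable from the others using a forest.
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5,
    random_state=0,
)
X_imp = imputer.fit_transform(X)
```

A forest-based imputer can pick up non-linearities and interactions among predictors without them being specified explicitly, which is one of the main attractions of this approach.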

Multiple imputation with interactions and non-linear terms

Multiple imputation has become an extremely popular approach to handling missing data, for a number of reasons. One is that once the imputed datasets have been generated, they can each be analysed using standard analysis methods, and the results pooled using Rubin’s rules. However, in addition to the missing at random assumption, for multiple imputation to give unbiased point estimates the model(s) used to impute missing data need to be (at least approximately) correctly specified. Because of this, care must be taken when choosing the imputation model.
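The pooling step mentioned above follows standard formulas: the pooled estimate is the average of the per-imputation estimates, and the total variance combines the average within-imputation variance with the between-imputation variance. A minimal sketch (function and variable names are my own):

```python
# Rubin's rules for pooling across m imputed datasets.
import math

def pool_rubin(estimates, variances):
    """Pool per-imputation point estimates and variances via Rubin's rules.

    Returns the pooled estimate, total variance, and standard error.
    """
    m = len(estimates)
    q_bar = sum(estimates) / m                 # pooled point estimate
    w = sum(variances) / m                     # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = w + (1 + 1 / m) * b                    # total variance
    return q_bar, t, math.sqrt(t)

est, total_var, se = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.03])
```

The `(1 + 1/m)` factor inflates the between-imputation component to account for using a finite number of imputations.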

What constitutes a reasonable imputation model will obviously depend on the dataset and situation at hand. One commonly encountered situation, where it is not obvious what one should do, is where the dataset, or the model(s) which will be fitted after imputation, contain interaction terms or non-linear terms such as squared terms.

Read more

When is complete case analysis unbiased?

My primary research area is missing data. Missing data are a common issue in empirical research. Within biostatistics they are almost ubiquitous – patients often do not come back to visits as planned, for a variety of reasons. In surveys, participants may move between waves and we lose contact with them, so that we are missing their responses to the questions we would have liked to ask them.

Read more