ANCOVA in RCTs – model-based standard errors are valid even under misspecification

A nice paper by Wang and colleagues, just published in Biometrics, examines the robustness of ANCOVA (i.e. linear regression) for analysing continuous outcomes in randomised trials. By this they mean a linear regression which adjusts for randomised treatment group and baseline covariates. In 2001 Yang and Tsiatis showed that this estimator is consistent for the average treatment effect even if the model is misspecified, and as such it can be recommended for general use in the analysis of such trials. They also offered a variance estimator for the resulting treatment effect estimate that is valid even if the model is misspecified, and compared it in simulations to the usual model-based variance estimator from ANCOVA (which they refer to as the OLS variance). Yang and Tsiatis reported that the OLS variance estimator performed well even when the linear model was misspecified, but their results suggested it exhibited some bias as the sample size increased.
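To make the setup concrete, here is a minimal sketch in R (my own illustration, not code from either paper) of the ANCOVA estimator: I simulate a 1:1 trial in which the true outcome model is non-linear in the baseline covariate, fit a deliberately misspecified linear working model, and extract both the usual model-based standard error and a robust sandwich standard error (via the sandwich package) as one misspecification-robust alternative:

library(sandwich)

set.seed(6723)
n <- 500
x <- rnorm(n)                        # baseline covariate
z <- rbinom(n, 1, 0.5)               # 1:1 randomised treatment indicator
y <- 2 + x + x^2 + z + rnorm(n)      # true outcome model is non-linear in x

# ANCOVA: regress outcome on treatment and baseline covariate, deliberately
# misspecified because the x^2 term is omitted
fit <- lm(y ~ z + x)

modelSE <- sqrt(vcov(fit)["z", "z"])                      # model-based (OLS) SE
sandwichSE <- sqrt(vcovHC(fit, type = "HC0")["z", "z"])   # robust sandwich SE
c(model = modelSE, sandwich = sandwichSE)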

In their new paper Wang and colleagues prove that, so long as the randomisation ratio is 1:1, the standard model-based variance estimator for the adjusted treatment effect from ANCOVA is valid even under model misspecification. This further strengthens the case for using a linear regression model with adjustment for baseline covariates to analyse randomised trials.
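A quick way to see the result in action is a small simulation, a sketch under a data generating mechanism I have made up rather than one from the paper: with 1:1 randomisation and a working model that wrongly omits the quadratic term, the model-based 95% confidence interval for the treatment effect should still achieve close to nominal coverage:

set.seed(1234)
nSim <- 1000
n <- 200
trueEffect <- 1
covered <- logical(nSim)

for (i in 1:nSim) {
  x <- rnorm(n)
  z <- rbinom(n, 1, 0.5)                      # 1:1 randomisation
  y <- x + x^2 + trueEffect * z + rnorm(n)    # outcome non-linear in x
  fit <- lm(y ~ z + x)                        # misspecified working model
  ci <- confint(fit)["z", ]                   # model-based 95% CI
  covered[i] <- ci[1] < trueEffect & trueEffect < ci[2]
}

mean(covered)    # should be close to 0.95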

An important caveat to the result of Wang and colleagues is that they assumed that treatment is assigned completely at random, and in particular independently of the baseline covariates. This rules out certain randomisation schemes, such as stratified randomisation, where treatment assignment depends on the subject’s baseline covariates. Indeed, we know that in this setting, if we don’t properly model the effects of the variables used to stratify the randomisation, our treatment effect variance estimates are in general not correct.
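To explore this, one could modify the simulation above so that treatment is balanced within strata of a prognostic baseline covariate while the analysis model omits that covariate (again a sketch of my own, not from the paper; in this setup the model-based interval tends to be conservative, with above-nominal coverage):

set.seed(5678)
nSim <- 1000
n <- 200
trueEffect <- 1
covered <- logical(nSim)

for (i in 1:nSim) {
  s <- rbinom(n, 1, 0.5)                  # prognostic stratification variable
  z <- numeric(n)
  for (level in 0:1) {                    # randomise 1:1 within each stratum
    idx <- which(s == level)
    z[idx] <- sample(rep(0:1, length.out = length(idx)))
  }
  y <- 2 * s + trueEffect * z + rnorm(n)
  fit <- lm(y ~ z)                        # analysis omits the stratifier s
  ci <- confint(fit)["z", ]
  covered[i] <- ci[1] < trueEffect & trueEffect < ci[2]
}

mean(covered)    # not 0.95 in general; here typically above it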

Running simulation studies in R

In my work, and indeed in blog posts on this site, I often perform simulation studies. They can be invaluable for exploring and testing the performance of statistical methods under different conditions. Recently Tim Morris, Ian White and Michael Crowther published an excellent paper in Statistics in Medicine, freely available here, on how to plan and run simulation studies. The paper contains a wealth of useful guidance and advice, and in particular highlights how inappropriate setting of random number seeds can cause things to go wrong!
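One piece of that advice, as I read it, is to set the seed once at the start of the whole study rather than resetting it inside each repetition, and to store the random number generator state alongside the results so that any individual repetition can be rerun on its own. A minimal sketch of that pattern in R (the seed value and the trivial data generation and analysis are just placeholders):

set.seed(82114)                    # one seed for the entire study

nSim <- 5
states <- vector("list", nSim)
est <- numeric(nSim)

for (i in 1:nSim) {
  states[[i]] <- .Random.seed      # save the RNG state before repetition i
  est[i] <- mean(rnorm(100))       # placeholder data generation and analysis
}

# to reproduce, say, repetition 3 on its own:
.Random.seed <- states[[3]]
identical(mean(rnorm(100)), est[3])   # TRUE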

Tim has an accompanying GitHub repository with Stata code for their illustrative example from the paper, in which they simulate survival data and analyse it using a number of different survival regression models. As part of the new MSc in Data Science & Statistics here at the University of Bath, I’ve put together a short introductory tutorial on performing simulation studies using R. It can be accessed here. I hope it gives a good introduction to the key elements of programming up a simulation study in R. If anyone has comments on it, or thinks I’ve omitted something important that should be covered, please get in touch via email or a comment on this page.
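For readers who just want the overall shape of such a study, the following is a condensed skeleton of my own (not code from the tutorial; the function names generateData and analyseData are purely illustrative): separate data-generating and analysis functions, a loop over repetitions, and a summary of the usual performance measures:

set.seed(20199)

generateData <- function(n = 100, beta = 0.5) {
  x <- rnorm(n)
  data.frame(x = x, y = beta * x + rnorm(n))
}

analyseData <- function(dat) {
  fit <- lm(y ~ x, data = dat)
  c(est = coef(fit)[["x"]], se = sqrt(vcov(fit)["x", "x"]))
}

nSim <- 1000
trueBeta <- 0.5
results <- t(replicate(nSim, analyseData(generateData(beta = trueBeta))))

# standard performance measures: bias, empirical SE, mean model SE, coverage
bias <- mean(results[, "est"]) - trueBeta
empSE <- sd(results[, "est"])
modSE <- mean(results[, "se"])
coverage <- mean(abs(results[, "est"] - trueBeta) < 1.96 * results[, "se"])
c(bias = bias, empSE = empSE, modSE = modSE, coverage = coverage)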

Critical bug fix for smcfcs in Stata

At a recent missing data course run by a colleague, participants using my multiple imputation program smcfcs under Stata 15.1 found that, when imputing a simulated dataset, smcfcs took much longer to run and issued many more rejection sampling warnings than it did for those running Stata 14.1. Moreover, the point estimates for the substantive/analysis model obtained by those using Stata 15.1 were dramatically different from those obtained under Stata 14.1, with the former being severely biased relative to the true parameter values.
