Is the two-sample t-test/ANOVA really biased in RCTs?

A couple of months ago I came across the paper “Bias, precision and statistical power of analysis of covariance in the analysis of randomized trials with baseline imbalance: a simulation study”, by Egbewale, Lewis and Sim, published in the open access online journal BMC Medical Research Methodology. Using simulation studies, as the title says, the authors investigate the bias, precision and power of three analysis methods for a randomized trial with a continuous outcome and a baseline measure of the same variable, when the baseline measure is imbalanced between the treatment groups. The three methods considered are ANOVA (here, a two-sample t-test of the outcomes), analysis of change scores (CSA; the change from baseline to follow-up), and analysis of covariance (ANCOVA), which corresponds to fitting a linear regression model with the outcome as the dependent variable and randomized treatment and the baseline measure as covariates.
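To fix notation (mine, not the paper’s), write $Y$ for the follow-up outcome, $X$ for the baseline measure of the same variable, and $Z \in \{0,1\}$ for randomized treatment, with subscripts 0 and 1 denoting group means. The three treatment effect estimators are then

$$\hat{\beta}_{\text{ANOVA}} = \bar{Y}_1 - \bar{Y}_0, \qquad \hat{\beta}_{\text{CSA}} = (\bar{Y}_1 - \bar{X}_1) - (\bar{Y}_0 - \bar{X}_0),$$

and $\hat{\beta}_{\text{ANCOVA}}$, the estimated coefficient of $Z$ from the fitted linear model $E(Y) = \alpha + \beta Z + \gamma X$.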

In the Discussion section of the paper, the authors write “ANCOVA is known to produce unbiased estimates of treatment effect in the presence of baseline imbalance when groups are randomized [19,20]. ANOVA and CSA, however, produce biased estimates in such circumstances. For both ANOVA and CSA, the direction of bias is related to the direction of baseline imbalance, and bias is greatest when baseline imbalance, in either direction, is most pronounced.”

This conclusion surprised me, and so I submitted the following reader comment at the journal website:

The topic of covariate adjustment in randomised trials is an important one. However, I believe the simulation study and conclusions of Egbewale et al. are flawed, and moreover my concerns mirror those of one of the paper’s original reviewers (Gillian Raab), which appear not to have been addressed.

The authors investigate the performance of three methods for analysing randomised trials with a single continuous outcome and a corresponding baseline measure. They focus on the issue of baseline imbalance, and conduct a simulation study where trial data are generated such that there is, on average, an imbalance at baseline between the two treatment groups. From their simulation results, the authors conclude that ANCOVA is unbiased, whereas analysis of change scores and an unadjusted comparison of outcomes (ignoring baseline) are biased.

The problem is that this approach to generating data does not correspond to how data arise in randomised trials. As the authors explain in the introduction, randomisation guarantees balance in expectation, or on average, but not balance in any given study. But in the simulation study conducted, data are generated so that there is systematically (i.e. on average) imbalance at baseline. It is therefore unsurprising that in this case the ANOVA analysis (a t-test of outcomes ignoring baseline) is biased. All this demonstrates is that if one has a confounder, and one does not adjust for it, estimates are biased. Put another way, the simulations show the following: if one performs repeated randomised trials in which patients with higher baseline values are more likely to be allocated to one of the treatment groups, then an ANOVA or analysis of change scores is biased. But of course this is not how patients are allocated to groups in a simple randomised trial!

In a randomised trial, provided that the randomisation procedure is not compromised, all three of the methods considered by the authors are unbiased, at least according to the statistical definition of bias of an estimator as the difference between the expectation of the estimator and the true parameter value. The methods do however differ in terms of precision/efficiency, and previously it has been shown that ANCOVA is superior in this regard to the other two methods. All of these results can be found in the following paper:

Yang L, Tsiatis A (2001). Efficiency study of estimators for a treatment effect in a pretest-posttest trial. The American Statistician; 55: 314-321.

The reviewer report to which I referred can be accessed here; it is available because the journal makes all reviews and submitted versions of papers freely available.
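To make the point concrete, here is a minimal simulation sketch in Python (using numpy, pandas and statsmodels; the sample size, treatment effect and baseline-outcome correlation are illustrative values of my own choosing, not the paper’s). The `informative=True` branch mimics the paper’s data-generating mechanism, in which allocation depends on the baseline value; the `informative=False` branch is simple randomisation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2014)

def one_trial(n=100, effect=1.0, rho=0.6, informative=False):
    """Simulate one two-arm trial: baseline x, outcome y, treatment indicator."""
    x = rng.normal(size=n)
    if informative:
        # The paper's set-up: allocation probability increases with baseline,
        # so groups are imbalanced on average -- not how randomisation works.
        treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))
    else:
        # Simple randomisation: groups balanced in expectation.
        treat = rng.binomial(1, 0.5, size=n)
    y = effect * treat + rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)
    df = pd.DataFrame({'x': x, 'y': y, 'treat': treat})
    return (
        smf.ols('y ~ treat', data=df).fit().params['treat'],         # ANOVA
        smf.ols('I(y - x) ~ treat', data=df).fit().params['treat'],  # change score
        smf.ols('y ~ treat + x', data=df).fit().params['treat'],     # ANCOVA
    )

for informative in (True, False):
    est = np.array([one_trial(informative=informative) for _ in range(5000)])
    print('allocation depends on baseline' if informative else 'simple randomisation')
    for name, col in zip(('ANOVA', 'change score', 'ANCOVA'), est.T):
        print(f'  {name:12s}: mean estimate {col.mean():.3f} (truth 1.0), '
              f'empirical SD {col.std():.3f}')
```

Running this, the informative-allocation scenario should reproduce the paper’s finding (the ANOVA estimate biased upwards, the change score estimate downwards, ANCOVA unbiased), while under simple randomisation all three mean estimates should sit at the true value of 1.0, with ANCOVA showing the smallest empirical standard deviation.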

I received an automated response to my submitted comment (14th June 2014), stating that it would be moderated and would appear online provided it contributed to the topic and complied with the journal’s standard terms and conditions. However, a week later nothing had happened, so I emailed the journal. They explained that my comment was being processed. Now a further month has passed, and my comment has still not appeared. I’m not sure why, but I thought I would post it here in the meantime.

Curious to know whether this had happened before, I did a Google search and came across the following blog post, in which someone else apparently had a similar problem getting a comment posted at the journal BMC Cancer. In that instance the journal editor responded to the blog post, explaining that the journal had been in the process of revamping its comments system, and also that the person’s original comment violated the journal’s comment policy because it linked to their blog (something I did not do).

No doubt there is a good reason for the delay in my comment going online. Nevertheless, such delays are unfortunate, since they partly negate the benefits of rapid online response at scientific journals.

UPDATE (28th July 2014)
I emailed BMC again to ask about my comment. Apparently it had not appeared due to a technical error with their system, but it is now online.
