Adjusting for baseline covariates in randomized controlled trials

Randomized controlled trials are generally considered the gold standard design for evaluating the effects of an intervention or treatment of interest. The fact that participants are randomized to the two (sometimes more) groups ensures that, at least in expectation, the treatment groups are balanced in respect of both measured and, importantly, unmeasured factors which may influence the outcome. As a consequence, differences in outcomes between the two groups can be attributed to the effect of being randomized to the treatment rather than the control (which is often another treatment).

Provided the randomization has not been compromised, the treatment effect estimate from the trial is unbiased, even without adjusting for any baseline covariates. This is the case even when there appears to be an imbalance in some baseline variable between the groups. This is because bias concerns whether the estimator (given by our statistical procedure, such as linear regression) has expectation in repeated samples equal to the target parameter. Sometimes the estimate will be above the true value, and sometimes below, but so long as on average it equals the target, we say the estimator is unbiased.
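This repeated-sampling property is easy to check by simulation. The following sketch (my own illustration, using the same data generating setup as the example later in this post) runs many simulated trials and averages the unadjusted estimates:

```r
# Simulate many trials and check that the unadjusted estimator averages
# to the true treatment effect of 1 across repeated samples
set.seed(1234)
nSim <- 10000
n <- 50
estimates <- replicate(nSim, {
  x <- rnorm(n)
  treat <- 1*(runif(n)<0.5)
  y <- x+treat+rnorm(n)
  coef(lm(y~treat))["treat"]
})
mean(estimates)  # close to 1, although individual estimates vary widely
sd(estimates)    # the repeated-sampling standard deviation of the estimator
```

Any single trial's estimate can be some way from 1, but the average across repeated trials is very close to it.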

Now let’s consider adjusting for one or more baseline covariates, measured at or before randomization, in our analysis. This is often done by fitting a regression model for the outcome, with the randomized group and baseline variables as covariates. What do we gain by the adjustment? If the baseline covariates are moderately correlated with the outcome, differences between the outcome values which can be attributed to differences in the baseline covariates can be removed, leading to a more precise (for linear models) estimate of the treatment effect. When, by chance, there is some imbalance in a baseline covariate between groups, the regression model in effect adjusts the outcome values to account for the differences in the baseline covariate between the two groups. Although this sounds like we are adjusting for confounding, we are not – in repeated sampling there is no systematic difference in the distribution of the baseline covariate between the treatment groups, as a consequence of randomization. Even when the two treatment groups are well balanced in respect of the baseline covariates, adjusting for the covariates will (for linear models) give a more precise treatment effect estimate.

We can illustrate this using R. We will simulate data for a small study of n=50 subjects, randomizing 50% to treat=0 and 50% to treat=1. We will then generate the outcome Y as depending on a baseline covariate X and the treatment indicator:

```
n <- 50
set.seed(31255)
x <- rnorm(n)
treat <- 1*(runif(n)<0.5)
y <- x+treat+rnorm(n)
```

Here the true treatment effect for group 1 vs group 0 is 1. If we perform an unadjusted analysis (which simply compares the two group means), we obtain:

```
> summary(lm(y~treat))

Call:
lm(formula = y ~ treat)

Residuals:
Min      1Q  Median      3Q     Max
-4.8977 -0.9312  0.0990  1.3050  2.7046

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  -0.5556     0.3268  -1.700 0.095571 .
treat         1.8113     0.4447   4.073 0.000173 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.567 on 48 degrees of freedom
Multiple R-squared:  0.2568,    Adjusted R-squared:  0.2413
F-statistic: 16.59 on 1 and 48 DF,  p-value: 0.0001731
```

The estimated treatment effect is 1.81, with a standard error of 0.44. Now we perform an analysis where we additionally adjust for the baseline covariate:

```
> summary(lm(y~treat+x))

Call:
lm(formula = y ~ treat + x)

Residuals:
Min      1Q  Median      3Q     Max
-3.4975 -0.6407  0.1508  0.7619  1.6868

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)   0.1874     0.2440   0.768  0.44635
treat         0.9741     0.3234   3.013  0.00416 **
x             1.1391     0.1521   7.491 1.48e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.069 on 47 degrees of freedom
Multiple R-squared:  0.6613,    Adjusted R-squared:  0.6468
F-statistic: 45.87 on 2 and 47 DF,  p-value: 8.955e-12
```

Now the treatment effect estimate is 0.97 with a standard error of 0.32. The estimate obtained by adjusting for X is much closer to the true value of 1, and the standard error is smaller, indicating a more precise estimate. The amount of precision gained by adjusting for covariates depends on the strength of the correlation between the covariate(s) and the outcome.
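To see how the precision gain depends on this correlation, here is a small sketch (my own illustration, with a deliberately large n so that the standard errors are stable) varying the coefficient on x, and hence the correlation between x and y:

```r
# As the coefficient on x grows, so does the correlation between x and y,
# and with it the precision advantage of the adjusted analysis
set.seed(62344)
n <- 10000
treat <- 1*(runif(n)<0.5)
for (beta in c(0, 1, 3)) {
  x <- rnorm(n)
  y <- beta*x + treat + rnorm(n)
  unadjSE <- summary(lm(y~treat))$coefficients["treat","Std. Error"]
  adjSE <- summary(lm(y~treat+x))$coefficients["treat","Std. Error"]
  cat("beta =", beta, " unadjusted SE =", round(unadjSE,3),
      " adjusted SE =", round(adjSE,3), "\n")
}
```

With beta=0 the two standard errors are essentially equal, so adjustment (asymptotically) costs nothing; as beta increases, only the unadjusted standard error grows.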

Assumptions when adjusting for covariates
We have seen that adjusting for a baseline covariate can increase the precision of our treatment effect estimate. But to do this, we have fitted a more complex regression model. This regression model assumes that the mean of Y depends linearly on X, and that the slope of this relationship is the same in the two groups. There is no guarantee that these assumptions will hold in any given study. We might therefore be concerned about using a covariate-adjusted analysis, in case these assumptions do not hold.

Fortunately, it turns out that for linear models, when treatment is randomized, violations of the assumptions of a linear association between Y and X and of the slopes being the same in the two groups do not affect the (asymptotic) unbiasedness of the treatment effect estimate. This robustness is a consequence of the fact that X and the treatment indicator are statistically independent, which is guaranteed by randomization. The result can be proved using semi-parametric estimating equation theory - see this paper and also Chapter 5 in the book Semiparametric Theory and Missing Data by Tsiatis.
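A simulation sketch of this robustness (my own illustration, not taken from the cited references): generate Y with a quadratic dependence on X, analyse with a model that wrongly assumes linearity, and check that the treatment effect estimate is still unbiased across repeated trials.

```r
# True outcome model is quadratic in x, but the analysis model assumes
# a linear x effect; randomization still protects the treatment estimate
set.seed(7251)
nSim <- 2000
n <- 500
estimates <- replicate(nSim, {
  x <- rnorm(n)
  treat <- 1*(runif(n)<0.5)
  y <- x^2 + treat + rnorm(n)      # quadratic dependence on x
  coef(lm(y~treat+x))["treat"]     # misspecified (linear) adjustment model
})
mean(estimates)  # close to the true value of 1 despite the misspecification
```

The average of the estimates is close to 1, even though the adjustment model is badly misspecified, because randomization makes treat independent of x.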

This means that, in a randomized trial setting, we can advocate adjusting for baseline covariates as our primary analysis method. Of course, it is important that the analysis approach is pre-specified. See the recent paper by Saquib et al on the topic of covariate adjustment and pre-specification in analysis plans for trials.

Covariate adjustment with binary outcomes
The preceding discussion was within the context of a continuous outcome, where we would ordinarily use a linear regression outcome model. What if the outcome is of a different type? Perhaps the most common is a binary outcome. In this setting, things are a little more complicated. This is principally because with a binary outcome, which is typically modelled using logistic regression, the treatment effect parameter we are estimating changes when we adjust for baseline covariates, due to the non-collapsibility of the odds ratio – the marginal and conditional odds ratios for the treatment effect differ (although the difference is often not too large). This is rather unfortunate, as it means that trials of the same treatments, in the same population, will estimate (in expectation) different treatment effects if in their analyses they adjust for different baseline covariates. Also, it turns out that adjusting for a baseline covariate in logistic regression reduces the precision of the treatment effect estimate (the standard error increases), but (apparently paradoxically) increases the power of the corresponding hypothesis test.
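Non-collapsibility can be seen with a small numerical example (my own illustration, with assumed parameter values): take a binary covariate x, independent of treatment, with a conditional treatment log odds ratio of 1, and compare this with the marginal odds ratio obtained by averaging the risks over the distribution of x.

```r
# Conditional log odds ratio for treatment is 1; x ~ Bernoulli(0.5),
# independent of treatment, with log odds ratio 2 for x
pr <- function(treat, x) plogis(-1 + treat + 2*x)
# marginal risks under each treatment, averaging over x
p1 <- 0.5*pr(1,0) + 0.5*pr(1,1)
p0 <- 0.5*pr(0,0) + 0.5*pr(0,1)
marginalOR <- (p1/(1-p1)) / (p0/(1-p0))
c(conditional=exp(1), marginal=marginalOR)
# marginal OR (about 2.23) is closer to 1 than the conditional OR (about 2.72)
```

Note that there is no confounding here at all – the attenuation of the marginal odds ratio is purely a property of the odds ratio as a measure of effect.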

Recently, methods have been developed for binary outcomes which allow adjustment for covariates while targeting the marginal odds ratio, allowing improved precision and power for testing that this parameter is 1, and overcoming the preceding issues. Tsiatis and colleagues have developed an approach based on estimating equation theory, which I discuss in another post.

van der Laan and colleagues have developed an approach based on their targeted maximum likelihood methodology. Both are somewhat more complicated to implement than simply fitting a logistic regression model with the baseline variables as covariates, but they will in general give improved efficiency compared with the unadjusted estimate, and both approaches give (asymptotically) unbiased estimates even if the assumed functional form for the dependence of Y on X is incorrectly specified. This is analogous to the robustness property that we have in the linear model case.

More about the targeted maximum likelihood approach in general can be found in van der Laan's book Targeted Learning (chapter 11 specifically looks at the application in the randomized trial setting).

7 thoughts on “Adjusting for baseline covariates in randomized controlled trials”

1. Your R code above (and the ensuing explanation) also applies to the general case when you have a regression model with a continuous and a categorical covariate, does it not? In other words, the simulation study you carry out is not specific to the case of a randomized trial where the focus is on the treatment effect, is it?

• Correct. It applies more generally, with multiple covariates, some of which could be categorical and some continuous.

2. But what about adjusting for baseline covariates in a logistic model when one then intends to report risk ratios obtained through standardization (see Cummings, “Methods for estimating adjusted risk ratios”, in the Stata Journal, for details on standardization)? Risk ratios are collapsible, so theoretically one could expect narrower confidence intervals and better power, while maintaining comparability with risk ratios from other studies. Or is the underlying logistic regression causing problems? What are your comments on that?

• Thanks Aleksandra. Yes, you can use a logistic regression model to estimate the risk ratio for treatment by standardisation, as you say (also see my post here, which, although it is about observational studies, also applies to RCTs).

It in fact turns out, although I don’t think it is widely appreciated, that the resulting estimator of the risk ratio is (asymptotically) unbiased even if the logistic regression model you use is mis-specified – see this paper by Rosenblum & van der Laan.
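As a sketch of standardisation via logistic regression (my own illustration; the variable names and the data generating model are assumed): fit the logistic model, predict each subject's risk with treatment set to 1 and then to 0, and take the ratio of the two average predicted risks.

```r
# Simulate a trial with a binary outcome
set.seed(9812)
n <- 1000
x <- rnorm(n)
treat <- 1*(runif(n)<0.5)
y <- 1*(runif(n) < plogis(-1 + treat + x))
fit <- glm(y ~ treat + x, family=binomial)
# standardise: average predicted risk with everyone treated vs untreated
p1 <- mean(predict(fit, newdata=data.frame(treat=1, x=x), type="response"))
p0 <- mean(predict(fit, newdata=data.frame(treat=0, x=x), type="response"))
p1/p0  # estimated marginal risk ratio
```

In practice a confidence interval for this ratio can be obtained by bootstrapping or via the delta method.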