Linear regression with random regressors, part 2

Previously I wrote about how, when linear regression is introduced and derived, it is almost always done assuming the covariates/regressors/independent variables are fixed quantities. As I wrote, in many studies such an assumption does not match reality, in that both the regressors and the outcome in the regression are realised values of random variables. I showed that the usual ordinary least squares (OLS) estimators are unbiased with random covariates, and that the usual standard error estimator, derived assuming fixed covariates, is unbiased with random covariates. This gives us some understanding of the behaviour of these estimators in the random covariate setting.

Here I'll take a different approach, and appeal to the powerful theory of estimating equations. It turns out that many of the statistical estimators we use can be expressed as being the solutions to a set of estimating equations. The theory is powerful because it allows us to derive the asymptotic (large sample) behaviour of the estimators, and also gives us a consistent estimator of variance (of the parameter estimator), enabling us to find standard errors and confidence intervals. An excellent article introducing this theory, by Stefanski and Boos, can be found here. For further details, I'd highly recommend Tsiatis' book Semiparametric Theory and Missing Data, which covers estimating equation theory in semiparametric models.

To recall the linear regression setup, suppose we have, for each subject, an outcome Y and a vector of predictors X. To keep the derivations simple, I will assume the first component of X is 1, representing a constant intercept. The fundamental modelling assumption is then that E(Y|X)=X^{T}\beta, where \beta is a column vector of regression coefficients to be estimated. The OLS estimator of \beta is usually expressed as:

\hat{\beta} = (\mathbf X^{T} \mathbf X)^{-1} \mathbf X^{T} \mathbf Y

where bold \mathbf Y and \mathbf X respectively denote the vector and matrix containing the values of Y and X for all n subjects in the sample. It is easy to show that the OLS estimator is also given by the value of \beta which solves the following estimating equation:

\sum^{n}_{i=1}X_{i}(Y_{i}-X^{T}_{i}\beta) = 0

where Y_{i} and X_{i} denote the values of Y and X for the ith subject.
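
To make this concrete, here is a small numerical sketch (not part of the original derivation; the data-generating values, variable names and the use of scipy.optimize.root are illustrative assumptions). It solves the estimating equation numerically and checks that the root coincides with the closed-form OLS estimator:

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                 # a single covariate, itself random
X = np.column_stack([np.ones(n), x1])   # first component of each X_i is 1 (intercept)
beta_true = np.array([1.0, 2.0])        # illustrative true coefficients
Y = X @ beta_true + rng.normal(size=n)  # outcome satisfying E(Y|X) = X^T beta

def estimating_function(beta):
    # sum over subjects of X_i (Y_i - X_i^T beta), returned as a vector
    return X.T @ (Y - X @ beta)

beta_ee = root(estimating_function, x0=np.zeros(2)).x  # root of the estimating equation
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)           # closed-form OLS estimator

print(np.allclose(beta_ee, beta_ols))  # True (up to numerical tolerance)
```

Because the estimating function is linear in \beta, the numerical root and the closed-form solution are the same up to solver tolerance.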

Asymptotic distribution

Now we can begin applying the theory of estimating equations. First, under certain regularity conditions, the theory tells us that the distribution of the estimator \hat{\beta} converges to that of a (multivariate) normal distribution as the sample size n tends to infinity. Importantly, this holds irrespective of whether the residuals \epsilon=Y-X^{T}\beta are normally distributed or not, and also irrespective of whether they have constant variance.
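
As an illustration (not from the original post), the following simulation sketch re-draws both the covariate and a deliberately skewed (exponential) residual in each replication; the sample size, replication number and coefficient values are arbitrary choices. The sampling distribution of the slope estimator is nonetheless close to normal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 5000
beta_true = np.array([1.0, 2.0])
slopes = np.empty(reps)

for r in range(reps):
    x1 = rng.normal(size=n)                   # covariate re-drawn each replication: random regressor
    X = np.column_stack([np.ones(n), x1])
    eps = rng.exponential(1.0, size=n) - 1.0  # mean-zero but skewed residuals
    slopes[r] = np.linalg.solve(X.T @ X, X.T @ (X @ beta_true + eps))[1]

# Standardise the replicated slope estimates and check normal-like coverage:
# roughly 95% should lie within 1.96 standard deviations of the mean.
z = (slopes - slopes.mean()) / slopes.std()
print(np.mean(np.abs(z) < 1.96))  # close to 0.95
```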


Next, the theory says that the estimator will be consistent if the estimating function, which here is X(Y-X^{T}\beta), has expectation zero when evaluated at the true value of \beta. Consistency means that for large sample sizes, the estimator will be close to the true population parameter value with high probability. To check that this condition holds for the OLS estimator, we must find the expectation E(X(Y-X^{T}\beta)). To do this, we make use of the law of total expectation:

E(X(Y-X^{T}\beta)) = E(E(X(Y-X^{T}\beta)|X)) = E(XE(Y-X^{T}\beta|X))

If the model is correctly specified, E(Y|X)=X^{T}\beta, and so E(Y-X^{T}\beta|X)=0. The estimating function thus has mean zero. We can therefore conclude the OLS estimator is consistent for \beta.
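
A quick numerical check of consistency (again an illustrative sketch, with arbitrary data-generating choices) is to fit the model at increasing sample sizes and watch the estimation error shrink:

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true = np.array([1.0, 2.0])

for n in (100, 10_000, 1_000_000):
    x1 = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1])
    Y = X @ beta_true + rng.normal(size=n)
    beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    # maximum absolute estimation error shrinks as n grows
    print(n, np.abs(beta_hat - beta_true).max())
```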


We now turn to the variance of the estimator. Without, for the moment, making any further assumptions beyond E(Y|X)=X^{T}\beta, estimating equation theory says that with a sample size of n, the estimator has (large sample) variance

Var(\hat{\beta}) = n^{-1}A(\beta^{*})^{-1}B(\beta^{*})\{A(\beta^{*})^{-1}\}^{T}

where A(\beta) denotes the matrix equal to minus the expectation of the derivative of the estimating function with respect to the parameter \beta, B(\beta) denotes the variance covariance matrix of the estimating function, and \beta^{*} denotes the true value of \beta. First we find the second of these matrices, using the law of total variance:

B(\beta) = Var(X(Y-X^{T}\beta)) = E(Var(X(Y-X^{T}\beta)|X)) + Var(E(X(Y-X^{T}\beta)|X))

Since E(X(Y-X^{T}\beta)|X)=0, the second term here is zero. If we write \epsilon=Y-X^{T}\beta, we have

B(\beta) = E(Var(X\epsilon|X)) = E(XVar(\epsilon|X)X^{T})

In this post, let's suppose the residuals have constant variance (I'll come back to the non-constant variance case in a later post), so that Var(\epsilon|X)=\sigma^{2}. Then

B(\beta) = \sigma^{2}E(XX^{T})

Turning now to the matrix A(\beta), taking minus the derivative (with respect to \beta) of the estimating function, we have

A(\beta)=E(-\frac{\partial}{\partial \beta} X(Y-X^{T}\beta)) = E(XX^{T})

Together, we thus have that the variance of the OLS estimator \hat{\beta} is equal to

Var(\hat{\beta}) = n^{-1}A(\beta^{*})^{-1}B(\beta^{*})\{A(\beta^{*})^{-1}\}^{T} = n^{-1}\sigma^{2}E(XX^{T})^{-1}

where the simplification uses the fact that A(\beta^{*})=E(XX^{T}) is symmetric.

The matrix E(XX^{T}) is the population (true) expectation of the product of the vector X with its transpose. Similarly, \sigma^{2} denotes the population residual variance. To estimate the variance in practice, we can use the expression in the previous equation, replacing \sigma^{2} by its usual sample estimate, \hat{\sigma}^{2}. The matrix E(XX^{T}) can be estimated by its empirical (sample) mean

n^{-1}\sum^{n}_{i=1}X_{i}X^{T}_{i}

The variance of the OLS estimator \hat{\beta} can thus be estimated by

\widehat{Var}(\hat{\beta}) = n^{-1}\hat{\sigma}^{2}\left(n^{-1}\sum^{n}_{i=1}X_{i}X^{T}_{i}\right)^{-1}

With a little bit of manipulation (which I won't show here), we can see that this is identical to the variance estimator used in OLS implementations, i.e.

\hat{\sigma}^{2}(\mathbf X^{T} \mathbf X)^{-1}

We have thus shown that the usual OLS variance estimator, derived assuming the covariates are fixed, is a consistent estimator of the variance of OLS in repeated sampling in which the covariates are random.
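
The equivalence can also be checked numerically. The sketch below (with the same kind of illustrative data-generating assumptions as earlier) forms the estimating-equation 'sandwich' variance estimate n^{-1}\hat{A}^{-1}\hat{B}\hat{A}^{-1}, with \hat{A} and \hat{B} the sample analogues of E(XX^{T}) and \sigma^{2}E(XX^{T}), and confirms it equals \hat{\sigma}^{2}(\mathbf X^{T}\mathbf X)^{-1}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])  # usual residual variance estimate

A_hat = X.T @ X / n                            # sample estimate of E(XX^T)
B_hat = sigma2_hat * A_hat                     # constant-variance form of B(beta)
A_inv = np.linalg.inv(A_hat)
sandwich = A_inv @ B_hat @ A_inv / n           # n^{-1} A^{-1} B A^{-1}

usual = sigma2_hat * np.linalg.inv(X.T @ X)    # standard OLS variance estimator
print(np.allclose(sandwich, usual))            # True
```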

In a future post, I'll look at how the preceding derivations can be extended to the case where we relax the assumption that the residuals have constant variance.

A simple semiparametric model

A final note. If the model only consists of the assumptions that E(Y|X)=X^{T}\beta and that the residuals have constant variance, the model is termed semiparametric. This is because although we have specified a certain aspect of the distribution of the observable random variables (specifically, how the mean of Y varies as a function of X, and that the residual variance is constant), all other aspects of the distribution are left arbitrary. The parametric component corresponds to the finite dimensional parameters \beta and \sigma^{2}, whilst the non-parametric component corresponds to all the other aspects of the joint distribution of Y and X which we have left arbitrary.

3 thoughts on "Linear regression with random regressors, part 2"

  1. Thank you for an interesting post! I am self-taught in statistics, so please bear with me if I seem to talk garbage!

    Under the assumption that regressors can be random variables, are there methods in the statistical literature that use the regressors as dependent variables along with the original response variable (a multivariate response)? One could then have a model for the original response variable (y) with parameter vector (P) and a sub-model for the regressors (x) with a parameter vector (B). In that way the variance covariance matrix would directly relate the parameter vectors P and B, where the diagonal elements would represent the variances of the parameters and the off-diagonals would give us an estimate of the effect of the regressor on the original response variable (y).

    The sub-model for the regressor would have an estimated mean equal to the sample mean and a variance equal to the sample variance (under the assumption that the regressor is measured error-free, i.e. with no noise).

    My dilemma is: how can I use the covariance between the regressor and response parameters to get an estimate of the "regression coefficient", whereby one unit of change in the regressor changes the response y by the value of that coefficient?

    Again, please consider my non-expertise in this matter!

    • Thanks! You could specify (and fit) a multivariate/joint model f(Y|X,theta1)f(X|theta2), where theta1 are the parameters describing the dependence of Y on X, and theta2 are the parameters describing the marginal distribution of the regressors. However, if you are only interested in the parameters in the model for Y|X, and you don't (in your model) assume any relationship between theta1 and theta2, then there is nothing to be gained by modelling the marginal distribution of the regressors. That this is true partly justifies the common practice of treating the regressors as if they were fixed, even in situations (as often occur) where they can be considered just as random as the dependent/response variable.

      I'm not sure if that helps answer your question though?
