Previously I wrote about how, when linear regression is introduced and derived, it is almost always done assuming the covariates/regressors/independent variables are fixed quantities. As I wrote, in many studies such an assumption does not match reality: both the regressors and the outcome in the regression are realised values of random variables. I showed that the usual ordinary least squares (OLS) estimators are unbiased with random covariates, and that the usual standard error estimator, derived assuming fixed covariates, remains unbiased with random covariates. This gives us some understanding of the behaviour of these estimators in the random covariate setting.

Here I'll take a different approach, and appeal to the powerful theory of estimating equations. It turns out that many of the statistical estimators we use can be expressed as solutions to a set of estimating equations. The theory is powerful because it allows us to derive the asymptotic (large sample) behaviour of the estimators, and also gives us a consistent estimator of the variance of the parameter estimator, enabling us to find standard errors and confidence intervals. An excellent article introducing this theory, by Stefanski and Boos, can be found here. For further details, I'd highly recommend Tsiatis' book Semiparametric Theory and Missing Data, which covers estimating equation theory in semiparametric models.

To recall the linear regression setup, suppose we have, for each subject, an outcome $Y$ and a vector of predictors $X$. To keep the derivations simple, I will assume the first component of $X$ is 1, representing a constant intercept. The fundamental modelling assumption is then that $E(Y|X)=X^{T}\beta$, where $\beta$ is a column vector of regression coefficients to be estimated. The OLS estimator of $\beta$ is usually expressed as:

$$\hat{\beta} = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{Y}$$

where $\mathbf{Y}$ and $\mathbf{X}$ respectively denote the vector and matrix containing the values of $Y$ and $X$ for all $n$ subjects in the sample. It is easy to show that the OLS estimator is also given by the value of $\beta$ which solves the following estimating equation:

$$\sum_{i=1}^{n} X_{i}(Y_{i}-X_{i}^{T}\beta) = 0$$

where $Y_{i}$ and $X_{i}$ denote the values of $Y$ and $X$ for the $i$th subject.
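
As a quick check, here is a minimal numerical sketch (the simulated data-generating mechanism is my own choice, purely for illustration) confirming that the closed-form OLS estimate makes the estimating function sum to zero:

```python
import numpy as np

# Simulate a small dataset: intercept plus one covariate, with both the
# covariate and the outcome drawn at random (illustrative choices only)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

# Closed-form OLS: (X'X)^{-1} X'Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Estimating function summed over subjects: sum_i X_i (Y_i - X_i' beta)
score = X.T @ (Y - X @ beta_hat)
print(np.allclose(score, 0.0))  # True: the OLS estimate solves the estimating equation
```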

**Asymptotic distribution**

Now we can start applying the theory of estimating equations. First, under certain regularity conditions, the theory tells us that the distribution of the estimator converges to that of a (multivariate) normal distribution as the sample size $n$ tends to infinity. Importantly, this holds irrespective of whether the residuals are normally distributed, and also irrespective of whether they have constant variance.

**Consistency**

Next, the theory says that the estimator will be consistent if the estimating function, which here is $X(Y-X^{T}\beta)$, has expectation zero when evaluated at the true value of $\beta$. Consistency means that for large sample sizes, the estimator will be close to the true population parameter value with high probability. To check that this condition holds for the OLS estimator, we must find the expectation $E\{X(Y-X^{T}\beta)\}$. To do this, we make use of the law of total expectation:

$$E\{X(Y-X^{T}\beta)\} = E[E\{X(Y-X^{T}\beta)|X\}] = E[X\{E(Y|X)-X^{T}\beta\}]$$

If the model is correctly specified, $E(Y|X)=X^{T}\beta_{0}$, where $\beta_{0}$ denotes the true value of $\beta$, and so $E[X\{E(Y|X)-X^{T}\beta_{0}\}]=0$. The estimating function thus has mean zero when evaluated at the true value. We can therefore conclude the OLS estimator is consistent for $\beta_{0}$.
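
To see consistency in action, here is a small simulation sketch (again, the data-generating mechanism is my own illustrative choice): the average distance between the OLS slope estimate and the true slope shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)
true_beta = np.array([1.0, 2.0])

def ols_slope(n):
    # Draw a fresh sample with a random covariate, return the OLS slope estimate
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    Y = X @ true_beta + rng.normal(size=n)
    return np.linalg.solve(X.T @ X, X.T @ Y)[1]

# Mean absolute error of the slope over repeated samples, at two sample sizes
mae = {n: np.mean([abs(ols_slope(n) - true_beta[1]) for _ in range(200)])
       for n in (50, 5000)}
print(mae[5000] < mae[50])  # True: the estimator concentrates around the truth
```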

**Variance**

We now turn to the variance of the estimator. Without for the moment making any further assumptions other than that $E(Y|X)=X^{T}\beta$, estimating equation theory says that with a sample size of $n$, the estimator has variance

$$n^{-1}A(\beta_{0})^{-1}B(\beta_{0})\{A(\beta_{0})^{-1}\}^{T}$$

where $A(\beta_{0})$ denotes the matrix equal to minus the expected derivative of the estimating function with respect to the parameter $\beta$, $B(\beta_{0})$ denotes the variance-covariance matrix of the estimating function, and $\beta_{0}$ denotes the true value of $\beta$. First we find the second of these matrices, using the law of total variance:

$$B(\beta_{0}) = Var\{X(Y-X^{T}\beta_{0})\} = E[Var\{X(Y-X^{T}\beta_{0})|X\}] + Var[E\{X(Y-X^{T}\beta_{0})|X\}]$$

Since $E(Y|X)=X^{T}\beta_{0}$, the second term here is zero. If we write $\epsilon=Y-X^{T}\beta_{0}$, we have

$$B(\beta_{0}) = E\{Var(X\epsilon|X)\} = E\{XX^{T}Var(\epsilon|X)\}$$

In this post, let's suppose the residuals have constant variance (I'll come back to the non-constant variance case in a later post), so that $Var(\epsilon|X)=\sigma^{2}$. Then

$$B(\beta_{0}) = \sigma^{2}E(XX^{T})$$

Turning now to the matrix $A(\beta_{0})$, taking minus the expected derivative (with respect to $\beta$) of the estimating function, we have

$$A(\beta_{0}) = -E\left\{\frac{\partial}{\partial \beta^{T}}X(Y-X^{T}\beta)\right\} = E(XX^{T})$$

Together, we thus have that the variance of the OLS estimator is equal to

$$n^{-1}E(XX^{T})^{-1}\,\sigma^{2}E(XX^{T})\,E(XX^{T})^{-1} = n^{-1}\sigma^{2}\{E(XX^{T})\}^{-1}$$

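As a sanity check on the algebra, the sketch below (with toy numbers of my own choosing) plugs a sample version of $E(XX^{T})$ into the sandwich formula $n^{-1}A^{-1}BA^{-1}$ with $B=\sigma^{2}E(XX^{T})$, and confirms the result equals $\sigma^{2}(\mathbf{X}^{T}\mathbf{X})^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma2 = 50, 1.7                       # arbitrary illustrative values
X = np.column_stack([np.ones(n), rng.normal(size=n)])

S = X.T @ X / n                           # sample analogue of E(XX')
A_inv = np.linalg.inv(S)                  # A(beta_0) = E(XX')
B = sigma2 * S                            # B(beta_0) = sigma^2 E(XX')
sandwich = A_inv @ B @ A_inv / n          # n^{-1} A^{-1} B A^{-1}

direct = sigma2 * np.linalg.inv(X.T @ X)  # sigma^2 (X'X)^{-1}
print(np.allclose(sandwich, direct))      # True: the sandwich collapses as claimed
```
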
The matrix $E(XX^{T})$ is the population (true) expectation of the product of the vector $X$ with its transpose. Similarly, $\sigma^{2}$ denotes the population residual variance. To estimate the variance in practice, we can use the expression in the previous equation, replacing $\sigma^{2}$ by its usual sample estimate, $\hat{\sigma}^{2}=(n-p)^{-1}\sum_{i=1}^{n}(Y_{i}-X_{i}^{T}\hat{\beta})^{2}$, where $p$ denotes the dimension of $X$. The matrix $E(XX^{T})$ can be estimated by its empirical (sample) mean:

$$n^{-1}\sum_{i=1}^{n}X_{i}X_{i}^{T} = n^{-1}\mathbf{X}^{T}\mathbf{X}$$

The variance of the OLS estimator can thus be estimated by

$$n^{-1}\hat{\sigma}^{2}(n^{-1}\mathbf{X}^{T}\mathbf{X})^{-1}$$

With a little bit of manipulation (the factor of $n^{-1}$ outside cancels with the $n^{-1}$ inside the inverse), we can see that this is identical to the variance estimator used in OLS implementations, i.e.

$$\widehat{Var}(\hat{\beta}) = \hat{\sigma}^{2}(\mathbf{X}^{T}\mathbf{X})^{-1}$$

We have thus shown that the usual OLS variance estimator, derived assuming the covariates are *fixed*, is a consistent estimator of the variance of OLS in repeated sampling in which the covariates are *random.*
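
To back this up numerically, here is a simulation sketch (with a data-generating mechanism of my own choosing) in which the covariate is redrawn afresh in every repetition; the average model-based standard error closely tracks the empirical standard deviation of the slope estimates across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 100, 2000
slopes, ses = [], []
for _ in range(reps):
    # Both covariate and outcome are random in each repeated sample
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ Y
    resid = Y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - 2)          # usual residual variance estimate
    slopes.append(beta_hat[1])
    ses.append(np.sqrt(sigma2_hat * XtX_inv[1, 1]))

# Empirical repeated-sampling SD of the slope vs the average estimated SE
print(np.std(slopes), np.mean(ses))  # the two should be close
```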

In a future post, I'll look at how the preceding derivations can be extended to the case where we relax the assumption that the residuals have constant variance.

**A simple semiparametric model**

A final note. If the model only consists of the assumptions that $E(Y|X)=X^{T}\beta$ and that the residuals have constant variance $\sigma^{2}$, the model is termed semiparametric. This is because although we have specified certain aspects of the distribution of the observable random variables (specifically, how the mean of $Y$ varies as a function of $X$, and that the residual variance is constant), all other aspects of the distribution are left arbitrary. The parametric component corresponds to the finite-dimensional parameters $\beta$ and $\sigma^{2}$, whilst the non-parametric component corresponds to all the other aspects of the joint distribution of $Y$ and $X$ which we have left arbitrary.

Thank you for an interesting post! I am self-taught in statistics, so please bear with me if I seem to talk garbage!

Under the assumption that regressors can be random variables, are there methods in the statistical literature that use the regressors as dependent variables along with the original response variable (a multivariate response)? One could then have a model for the original response variable (y) with parameter vector (P) and a sub-model for the regressors (x) with a parameter vector (B). In that way the variance-covariance matrix would directly relate the parameter vectors P and B, where the diagonal elements would represent the variances of the parameters and the off-diagonals would give us an estimate of the effect of the regressor on the original response variable (y).

The sub-model for the regressor would have an estimated mean equal to the sample mean and variance equal to the sample variance (under the assumption that the regressor is measured error-free - no noise).

My dilemma is this: how do I use the covariance between the regressor and response parameters to get an estimate of the "regression coefficient", where one unit of change in the regressor changes the response y by "coefficient" units?

Again, please consider my non-expertise in this matter!

Thanks! You could specify (and fit) a multivariate/joint model f(Y|X,theta1)f(X|theta2), where theta1 are the parameters describing the dependence of Y on X, and theta2 are parameters describing the marginal distribution of the regressors. However, if you are only interested in the parameters in the model for Y|X, and you don't (in your model) assume any relationship between theta1 and theta2, then there is nothing to be gained by modelling the marginal distribution of the regressors. That this is true partly justifies the common practice of treating the regressors as if they were fixed, even in situations (as often occur) where they can be considered as random as the dependent/response variable.

I'm not sure if that helps answer your question though?

On the topic of random predictors/covariates, and the fact that standard statistical theory treats predictors as fixed, I'd highly recommend looking at the following paper which I have just come across: Models as Approximations: How Random Predictors and Model Violations Invalidate Classical Inference in Regression