Linear regression with random regressors, part 2

Previously I wrote about how, when linear regression is introduced and derived, it is almost always done assuming the covariates/regressors/independent variables are fixed quantities. As I wrote, in many studies such an assumption does not match reality, in that both the regressors and the outcome are realised values of random variables. I showed that the usual ordinary least squares (OLS) estimators are unbiased with random covariates, and that the usual standard error estimator, derived assuming fixed covariates, is unbiased with random covariates. This gives us some understanding of the behaviour of these estimators in the random covariate setting.

Here I'll take a different approach, and appeal to the powerful theory of estimating equations. It turns out that many of the statistical estimators we use can be expressed as being the solutions to a set of estimating equations. The theory is powerful because it allows us to derive the asymptotic (large sample) behaviour of the estimators, and also gives us a consistent estimator of variance (of the parameter estimator), enabling us to find standard errors and confidence intervals. An excellent article introducing this theory, by Stefanski and Boos, can be found here. For further details, I'd highly recommend Tsiatis' book Semiparametric Theory and Missing Data, which covers estimating equation theory in semiparametric models.

To recall the linear regression setup, suppose we have, for each subject, an outcome Y and a vector of predictors X. To keep the derivations simple, I will assume the first component of X is 1, representing a constant intercept. The fundamental modelling assumption is then that E(Y|X)=X^{T}\beta, where \beta is a column vector of regression coefficients to be estimated. The OLS estimator of \beta is usually expressed as:

\hat{\beta} = (\mathbf X^{T} \mathbf X)^{-1} \mathbf X^{T} \mathbf Y

where bold \mathbf Y and \mathbf X respectively denote the vector and matrix containing the values of Y and X for all n subjects in the sample. It is easy to show that the OLS estimator is also given by the value of \beta which solves the following estimating equation:

\sum^{n}_{i=1}X_{i}(Y_{i}-X^{T}_{i}\beta) = 0

where Y_{i} and X_{i} denote the values of Y and X for the ith subject.
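To make this concrete, here is a small numerical sketch (in Python with NumPy, using simulated data; the setup and variable names are my own, not from the post) checking that the closed-form OLS estimator does indeed solve this estimating equation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Random covariates: a column of ones (the intercept) plus one continuous regressor
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
Y = X @ beta_true + rng.normal(size=n)

# Closed-form OLS estimator: (X'X)^{-1} X'Y
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

def estimating_function(beta):
    # sum_i X_i (Y_i - X_i' beta), written in matrix form as X'(Y - X beta)
    return X.T @ (Y - X @ beta)

# The estimating function evaluated at the OLS estimate is (numerically) zero
print(estimating_function(beta_hat))
```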

Asymptotic distribution

Now we can begin applying the theory of estimating equations. First, under certain regularity conditions, the theory tells us that the distribution of the estimator \hat{\beta} converges to a (multivariate) normal distribution as the sample size n tends to infinity. Importantly, this holds irrespective of whether the errors \epsilon=Y-X^{T}\beta are normally distributed, and also irrespective of whether they have constant variance.
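As an illustrative sketch (simulated data, my own setup), we can check this empirically by repeatedly fitting OLS with deliberately skewed, non-normal errors and looking at the distribution of the standardised slope estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 500, 2000
beta_true = np.array([1.0, 2.0])
slopes = np.empty(reps)

for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.exponential(size=n) - 1.0  # skewed, non-normal, mean-zero errors
    Y = X @ beta_true + eps
    slopes[r] = np.linalg.solve(X.T @ X, X.T @ Y)[1]

# If the estimator is approximately normal, about 95% of the standardised
# estimates should fall within +/- 1.96
z = (slopes - slopes.mean()) / slopes.std()
frac = np.mean(np.abs(z) < 1.96)
print(frac)
```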

Consistency

Next, the theory says that the estimator will be consistent if the estimating function, which here is X(Y-X^{T}\beta), has expectation zero when evaluated at the true value of \beta. Consistency means that for large sample sizes, the estimator will be close to the true population parameter value with high probability. To check that this condition holds for the OLS estimator, we must find the expectation E(X(Y-X^{T}\beta)). To do this, we make use of the law of total expectation:

E(X(Y-X^{T}\beta))=E(E(X(Y-X^{T}\beta)|X))=E(XE(Y-X^{T}\beta|X))

If the model is correctly specified, E(Y|X)=X^{T}\beta, and so E(Y-X^{T}\beta|X)=0. The estimating function thus has mean zero. We can therefore conclude the OLS estimator is consistent for \beta.
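A quick simulation sketch (again with made-up data and my own helper function) illustrating consistency: the average estimation error of the slope shrinks as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(2)
beta_true = np.array([1.0, 2.0])

def mean_abs_error(n, reps=50):
    # Average absolute error of the OLS slope estimate over `reps` simulated datasets
    errs = []
    for _ in range(reps):
        X = np.column_stack([np.ones(n), rng.normal(size=n)])
        Y = X @ beta_true + rng.normal(size=n)
        beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
        errs.append(abs(beta_hat[1] - beta_true[1]))
    return float(np.mean(errs))

for n in (100, 10_000):
    print(n, mean_abs_error(n))
```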

Variance

We now turn to the variance of the estimator. Making, for the moment, no further assumption other than that E(Y|X)=X^{T}\beta, estimating equation theory says that with a sample size of n, the estimator has asymptotic variance

n^{-1}A(\beta^{*})^{-1}B(\beta^{*})(A(\beta^{*})^{-1})^{T}

where A(\beta) denotes the matrix equal to minus the expected derivative of the estimating function with respect to the parameter \beta, B(\beta) denotes the variance-covariance matrix of the estimating function, and \beta^{*} denotes the true value of \beta. First we find the second of these matrices, using the law of total variance:

B(\beta)=Var(X(Y-X^{T}\beta))=E(Var(X(Y-X^{T}\beta)|X))+Var(E(X(Y-X^{T}\beta)|X))

Since E(X(Y-X^{T}\beta)|X)=0, the second term here is zero. If we write \epsilon=Y-X^{T}\beta, we have

B(\beta)=Var(X(Y-X^{T}\beta))=E(Var(\epsilon|X)XX^{T})

In this post, let's suppose the errors have constant variance (I'll come back to the non-constant variance case in a later post), so that Var(\epsilon|X)=\sigma^{2}. Then

B(\beta)=\sigma^{2}E(XX^{T})

Turning now to the matrix A(\beta), taking minus the derivative (with respect to \beta) of the estimating function, we have

A(\beta)=E(-\frac{\partial}{\partial \beta} X(Y-X^{T}\beta)) = E(XX^{T})

Together, we thus have that the variance of the OLS estimator \hat{\beta} is equal to

n^{-1}E(XX^{T})^{-1}\sigma^{2}E(XX^{T})E(XX^{T})^{-1}=n^{-1}\sigma^{2}E(XX^{T})^{-1}

The matrix E(XX^{T}) is the population (true) expectation of the product of the vector X with its transpose. Similarly, \sigma^{2} denotes the population error variance. To estimate the variance in practice, we can use the expression in the previous equation, replacing \sigma^{2} by its usual estimate \hat{\sigma}^{2}, based on the residuals. The matrix E(XX^{T}) can be estimated by its empirical (sample) mean

\hat{E}(XX^{T})=n^{-1}\sum^{n}_{i=1}X_{i}X_{i}^{T}

The variance of the OLS estimator \hat{\beta} can thus be estimated by

n^{-1}\hat{\sigma}^{2}\hat{E}(XX^{T})^{-1}=\hat{\sigma}^{2}(\sum^{n}_{i=1}X_{i}X_{i}^{T})^{-1}

With a little bit of manipulation (which I won't show here), we can see that this is identical to the variance estimator used in OLS implementations, i.e.

\hat{\sigma}^{2}(\mathbf X^{T} \mathbf X)^{-1}

We have thus shown that the usual OLS variance estimator, derived assuming the covariates are fixed, is a consistent estimator of the variance of OLS in repeated sampling in which the covariates are random.
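The algebra above is easy to verify numerically. A minimal sketch (simulated data; the names are mine): the estimating-equation variance estimate n^{-1}\hat{\sigma}^{2}\hat{E}(XX^{T})^{-1} coincides with the usual \hat{\sigma}^{2}(\mathbf X^{T}\mathbf X)^{-1}:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])  # usual residual-based estimate

# Estimating-equation form: n^{-1} * sigma2_hat * Ehat(XX')^{-1}
E_XX = (X.T @ X) / n
V_ee = sigma2_hat * np.linalg.inv(E_XX) / n

# Usual OLS form: sigma2_hat * (X'X)^{-1}
V_ols = sigma2_hat * np.linalg.inv(X.T @ X)

print(np.allclose(V_ee, V_ols))  # True
```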

In a future post, I'll look at how the preceding derivations can be extended to the case where we relax the assumption that the errors have constant variance.

A simple semiparametric model

A final note. If the model only consists of the assumptions that E(Y|X)=X^{T}\beta and that the errors have constant variance, the model is termed semiparametric. This is because although we have specified a certain aspect of the distribution of the observable random variables (specifically, how the mean of Y varies as a function of X, and that the error variance is constant), all other aspects of the distribution are left arbitrary. The parametric component corresponds to the finite dimensional parameters \beta and \sigma^{2}, whilst the non-parametric component corresponds to all the other aspects of the joint distribution of Y and X which we have left arbitrary.

7 thoughts on “Linear regression with random regressors, part 2”

  1. Thank you for an interesting post! I am self-taught in statistics, so please bear with me if I seem to talk garbage!

    Under the assumption that regressors can be random variables, are there methods in statistical literature that use regressors as dependent variables along with the original response variable (multivariate response)? One could then have a model for the original response variable (y) with parameter vector (P) and a sub-model for the regressors (x) with a parameter vector (B). In that way the variance covariance matrix would directly correlate the parameter vectors P and B where the diagonal elements would represent the variance of the parameters and the off-diagonals would give us an estimate of the effect of regressor on the original response variable (y).

    The sub-model for the regressor would have an estimated mean equal to the sample mean and the variance representing the sample standard deviation (under the assumption that the regressor is measured error free - no noise).

    My dilemma is - how to use the covariance between the regressor and response parameters to get an estimate of the "regression coefficient" where one unit of change in regressor will change the response y by "coefficient times"

    Again, please consider my non-expertise in this matter !

    • Thanks! You could specify (and fit) a multivariate/joint model f(Y|X,theta1)f(X|theta2), where theta1 are the parameters describing the dependence of Y on X, and theta2 are parameters describing the marginal distribution of the regressors. However, if you are only interested in the parameters in the model for Y|X, and you don't (in your model) assume any relationship between theta1 and theta2, then there is nothing to be gained by modelling the marginal distribution of the regressors. That this is true partly justifies the common practice of treating the regressors as if they were fixed even in situations (as often occurs) that they can be considered as random as the dependent/response variable.

      I'm not sure if that helps answer your question though?

  2. Hello,
    I have few comments about the post in the page.

    1. the letters Y and X in the section "variance" should be in boldface, as they are sampling variables and represent a column vector and a matrix, respectively. I find it kind of hard to follow the steps written this way.

    2. the variance of the OLS estimator in this page seems quite different from the one you got at the page
    http://thestatsgeek.com/2013/08/30/why-regression-inference-assuming-fixed-predictors-is-still-valid-for-random-predictors/
    even though both are for random predictors. If nothing else, here it tends to zero as the sample size n tends to infinity.
    How do you explain that?

    3. in this page you call "residuals" what is normally called "errors", the difference between the two being that for the latter we use the real (unknown) parameters \beta, whereas for the former we use the OLS estimates and the observed sample values. The common variance \sigma^2 is assumed for the errors and not the residuals, which in general have variance dependent on the sample index.
    In this context, I do not understand what you mean by the "usual sample estimate" of \sigma^2, as we do not know \beta.
    If you mean estimating \sigma^2 by means of the residual variance, then I would not describe that as a usual sample estimate.

    thanks for clarifying this.
    simone

    • Thanks Simone for your comments/questions.

      1. Sorry you find the notational convention hard to follow. I would say however that there is no standard convention for these things - indeed the book on semiparametric inference linked to in the article for example does not use boldface for random variables.

      2. The variance estimator in the preceding post was:

      \hat{\sigma}^{2}(\mathbf X^{T} \mathbf X)^{-1}

      where the \mathbf X matrix is the matrix of covariate values across all n subjects. At the bottom of the current post, the penultimate variance expression does indeed involve n^{-1}, and an expectation involving X (not boldface), which indicates a randomly sampled covariate vector for an individual subject. As the text then explains, further algebraic manipulation which is not shown in the post shows that you can re-express this variance estimator as the one given in the preceding post.

      3. Thank you - you are right. I shall edit the post to correct this.

  3. I am confused by the notation of X and X^{T}. In this article you say E(Y|X)=X^{T}\beta, which means X is a pxn matrix. In the previous article E(Y|X)=X\beta was used, where X is an nxp matrix. My understanding is that this is the normal representation of X.
    Some of the following results use X as an nxp matrix, like \hat{\beta}=(X^{T}X)^{-1}X^{T}Y, while some results use X as a pxn matrix, like X(Y-X^{T}\beta).
    I apologize for my math typesetting here.

    • Although it may not be that clear visually, the post contains both X without bold and X in bold. X without bold indicates a column vector of covariate values for an individual subject. Bold X represents the nxp matrix of covariate values across all n subjects.
