Linear regression is one of the workhorses of statistical analysis, permitting us to model how the expectation of an outcome Y depends on one or more predictors (also called covariates, regressors, or independent variables) X. Previously I wrote about the assumptions required for the validity of ordinary linear regression estimates and their inferential procedures (tests, confidence intervals), assuming (as we often do) that the residuals are normally distributed with constant variance.
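To make these quantities concrete, here is a minimal sketch (my own, using simulated data and Python's statsmodels, not code from the original post) that fits an ordinary least squares regression and extracts the residuals and fitted values to which the normality and constant-variance assumptions refer:

```python
# Minimal illustrative sketch: simulated data, not from the original post.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=200)  # simulated outcome

X = sm.add_constant(x)       # design matrix with an intercept column
fit = sm.OLS(y, X).fit()     # ordinary least squares fit

print(fit.summary())         # estimates, tests, confidence intervals
residuals = fit.resid        # residuals: check these for normality
fitted = fit.fittedvalues    # plot residuals against these to check constant variance
```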
Jonathan Bartlett
Assumptions for linear regression
Linear regression is one of the most commonly used statistical methods; it allows us to model how an outcome variable depends on one or more predictors (sometimes called independent variables). In particular, we model how the mean, or expectation, of the outcome varies as a function of the predictors:
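In its standard form, with predictors $X_1, \dots, X_p$ and regression coefficients $\beta_0, \beta_1, \dots, \beta_p$ (the notation here is assumed, as the excerpt breaks off at the colon), this mean model is written as

$$E(Y \mid X_1, \dots, X_p) = \beta_0 + \beta_1 X_1 + \dots + \beta_p X_p.$$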
A/B testing and Pearson’s chi-squared test of independence
A good friend of mine asked me recently about how to do A/B testing. As he explained, A/B testing refers to the process in which, when someone visits a website, the site sends them to one of two (or possibly more) different ‘landing’ or home pages, chosen at random. The purpose is to determine which page version generates the superior outcome, e.g. which page generates more advertising revenue, or which page leads a greater proportion of visitors to continue visiting the site.
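For concreteness, here is a minimal sketch (my own, with entirely hypothetical counts, using Python's SciPy rather than anything from the original post) of applying Pearson's chi-squared test of independence to the 2×2 table of page version by outcome:

```python
# Hypothetical counts comparing two landing pages on whether visitors continue.
from scipy.stats import chi2_contingency

# rows: page A, page B; columns: continued, did not continue (made-up numbers)
table = [[120, 880],
         [150, 850]]

# correction=False gives the classic Pearson chi-squared statistic
# (by default SciPy applies Yates' continuity correction to 2x2 tables)
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```

Under the null hypothesis that the outcome is independent of which page a visitor was shown, the statistic is referred to a chi-squared distribution with one degree of freedom.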