Wald vs likelihood ratio test

In a course on likelihood-based inference, one of the key topics is testing and confidence interval construction based on the likelihood function. Usually the Wald, likelihood ratio, and score tests are covered. In this post I'm going to review the advantages and disadvantages of the Wald and likelihood ratio tests. I will focus on confidence intervals rather than tests, because the deficiencies of the Wald approach are seen more transparently there.
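As a rough illustration of the kind of contrast involved, the sketch below compares Wald and likelihood ratio 95% confidence intervals for a binomial proportion; the counts are invented, and the grid search is just a simple way of inverting the likelihood ratio test, not the post's own method.

```python
# A minimal sketch contrasting Wald and likelihood ratio intervals for a
# binomial proportion (illustrative numbers, not from the post).
import numpy as np
from scipy import stats

y, n = 2, 20            # 2 successes out of 20 trials
phat = y / n

# Wald 95% interval: symmetric around the estimate, can stray outside [0, 1]
se = np.sqrt(phat * (1 - phat) / n)
wald = (phat - 1.96 * se, phat + 1.96 * se)

# Likelihood ratio 95% interval: keep all p whose log-likelihood lies within
# half of chi2(1, 0.95) of the maximised log-likelihood
def loglik(p):
    return y * np.log(p) + (n - y) * np.log(1 - p)

grid = np.linspace(1e-6, 1 - 1e-6, 100000)
inside = 2 * (loglik(phat) - loglik(grid)) <= stats.chi2.ppf(0.95, df=1)
lr = (grid[inside].min(), grid[inside].max())

print("Wald:", wald)   # symmetric, lower limit falls below zero here
print("LR:  ", lr)     # asymmetric, respects the [0, 1] range
```

With few successes the Wald interval is forced to be symmetric and dips below zero, while the likelihood ratio interval is asymmetric and stays within the parameter space.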

Read more

The t-test and robustness to non-normality

The t-test is one of the most commonly used tests in statistics. The two-sample t-test allows us to test the null hypothesis that the population means of two groups are equal, based on samples from each of the two groups. In its simplest form, it assumes that in the population the variable/quantity of interest X follows a N(\mu_{1},\sigma^{2}) distribution in the first group and a N(\mu_{2},\sigma^{2}) distribution in the second group. That is, the variance is assumed to be the same in both groups, and the variable is normally distributed around the group mean. The null hypothesis is then that \mu_{1}=\mu_{2}.
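A minimal sketch of this setup on simulated data (the group sizes, means, and common standard deviation below are illustrative choices of mine, not the post's):

```python
# Two-sample t-test on simulated data satisfying the stated assumptions:
# normality in each group and a common variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group1 = rng.normal(loc=0.0, scale=1.0, size=50)   # N(mu1, sigma^2)
group2 = rng.normal(loc=0.5, scale=1.0, size=50)   # N(mu2, sigma^2), same sigma

# equal_var=True gives the classical test that assumes a common variance
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```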

Read more

A/B testing and Pearson's chi-squared test of independence

A good friend of mine asked me recently about how to do A/B testing. As he explained, A/B testing refers to the process whereby, when someone visits a website, the site sends them to one of two (or possibly more) different 'landing' or home pages, with the page chosen at random. The purpose is to determine which page version generates a superior outcome, e.g. which page generates more advertising revenue, or which page leads a greater proportion of visitors to continue visiting the site.
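As a quick sketch of how such data might be analysed with Pearson's chi-squared test of independence (the counts below are invented purely for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: page A, page B.  Columns: continued visiting, did not continue.
table = np.array([[120, 880],
                  [150, 850]])

# Pearson's chi-squared test of independence on the 2x2 table
# (correction=False gives the uncorrected Pearson statistic)
chi2, p_value, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-squared = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
```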

Read more

The difference between the sample mean and the population mean

Someone recently asked me what the difference was between the sample mean and the population mean. This is really a question which goes to the heart of what it means to perform statistical inference. Whatever field we are working in, we are usually interested in answering some kind of question, and often this can be expressed in terms of some numerical quantity, e.g. what is the mean income in the US? This question can be framed mathematically by saying we would like to know the value of a parameter describing some distribution. In the case of the mean US income, the parameter is the mean of the distribution of US incomes. Here the population is the US population, and the population mean is the mean of all the incomes in the US population. For our objective, the population mean is the parameter of interest.
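A tiny simulation can make the distinction concrete (the 'population' of incomes below is invented; in practice we would of course not have the whole population available):

```python
# The population mean is a single fixed number; the sample mean is an
# estimate of it that varies from one random sample to the next.
import numpy as np

rng = np.random.default_rng(2024)
population = rng.lognormal(mean=11, sigma=0.5, size=1_000_000)  # invented incomes
print("population mean:", population.mean())

for i in range(3):
    sample = rng.choice(population, size=100, replace=False)
    print(f"sample {i + 1} mean:", sample.mean())
```

Each pass of the loop gives a different sample mean, but there is only ever one population mean.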

Read more

The miracle of the bootstrap

In my opinion one of the most useful tools in the statistician's toolbox is the bootstrap. Let's suppose that we want to estimate something slightly non-standard. We have written a program in our favourite statistical package to calculate the estimate. But in addition to the estimate itself, we need a measure of its precision, as given by its standard error. We saw in an earlier post how the standard error can be calculated for the sample mean. With a non-standard estimator, it may be too difficult to derive an analytical expression for an estimate of the standard error. Or in some situations it may not be worth the intellectual effort of working out an analytical standard error.
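A minimal sketch of the basic idea, bootstrapping the standard error of the sample median (the data are simulated, and the median is simply my stand-in example of an estimator without a convenient analytical standard error):

```python
import numpy as np

rng = np.random.default_rng(7)
incomes = rng.lognormal(mean=11, sigma=0.5, size=200)   # invented sample

def estimator(x):
    return np.median(x)          # stand-in for a 'non-standard' estimator

# Resample the data with replacement many times, recompute the estimate each
# time, and use the SD of those estimates as the bootstrap standard error.
B = 2000
boot_estimates = np.empty(B)
for b in range(B):
    resample = rng.choice(incomes, size=len(incomes), replace=True)
    boot_estimates[b] = estimator(resample)

print("estimate:    ", estimator(incomes))
print("bootstrap SE:", boot_estimates.std(ddof=1))
```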

Read more

Standard deviation versus standard error

A topic which many students of statistics find difficult is the difference between a standard deviation and a standard error.

The standard deviation is a measure of the variability of a random variable. For example, if we collect some data on incomes from a sample of 100 individuals, the sample standard deviation is an estimate of how much variability there is in incomes between individuals. Let's suppose the average (mean) income in the sample is $100,000, and the (sample) standard deviation is $10,000. The standard deviation of $10,000 gives us an indication of how much, on average, incomes deviate from the mean of $100,000.
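For a concrete sketch of these figures, the snippet below simulates incomes roughly matching them; the standard error line uses the usual formula for the sample mean (standard deviation divided by the square root of the sample size) and is included only to set up the contrast the post's title refers to.

```python
import numpy as np

rng = np.random.default_rng(42)
incomes = rng.normal(loc=100_000, scale=10_000, size=100)  # invented data

n = len(incomes)
sd = incomes.std(ddof=1)   # sample SD: spread of incomes between individuals
se = sd / np.sqrt(n)       # standard error of the sample mean: SD / sqrt(n)

print(f"mean = {incomes.mean():,.0f}")   # around 100,000
print(f"SD   = {sd:,.0f}")               # around 10,000
print(f"SE   = {se:,.0f}")               # around 1,000
```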

Read more