Inference Books

Links are provided to Amazon, from which I may earn an affiliate commission if you subsequently make a purchase.

In All Likelihood by Pawitan (Amazon Affiliate link)

In All Likelihood takes the reader on a tour of likelihood-based inference. Starting with a historical perspective on the major inferential philosophies, Pawitan goes on to explain the key concepts and properties of likelihood-based inference. The beauty of this book is that the key concepts are explained using copious examples. Technical regularity conditions are largely left out, enabling one to understand the concepts without getting bogged down in details. The coverage is expansive: by the end Pawitan has covered, as important examples, generalized linear models, survival analysis, the EM algorithm for obtaining maximum likelihood estimates, estimating equations and quasi-likelihood, empirical likelihood, non-parametric methods, and random-effects models. I have learnt a lot from this book, and continue to learn more the more I read it.
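To make the core idea of likelihood-based estimation concrete, here is a minimal sketch of my own (not an example taken from the book): it maximises the log-likelihood of an exponential model numerically and compares the answer with the closed-form maximum likelihood estimate. The simulated data, the neg_log_lik function, and the optimisation bounds are purely illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100)  # illustrative data, true rate = 0.5

def neg_log_lik(rate):
    """Negative log-likelihood for an exponential(rate) model."""
    return -(len(x) * np.log(rate) - rate * x.sum())

# Maximise the likelihood numerically over a bounded interval...
res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10), method="bounded")
print("Numerical MLE of rate:  ", res.x)

# ...and compare with the closed-form MLE, 1 / sample mean
print("Closed-form MLE of rate:", 1 / x.mean())
```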

An Introduction to the Bootstrap by Efron and Tibshirani (Amazon Affiliate link)

I’ve previously written about ‘the miracle of the bootstrap’. This classic book by Efron and Tibshirani explains the concepts and details of the bootstrap extremely clearly. They begin by reviewing basic probability and inference, before describing the bootstrap principle: that by resampling from our observed data we can mimic the process of sampling from a population. They then describe how the bootstrap can be used to estimate standard errors, confidence intervals (of varying sophistication), and biases in estimators. The book covers a lot of ground, including non-parametric and parametric bootstrap approaches, the jackknife (leaving one observation out at a time), and the issues involved in assessing the predictive ability of models. A great book.
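As a rough illustration of the bootstrap principle described above (my own sketch, not code from the book), the following estimates a non-parametric bootstrap standard error and percentile interval for the sample median. The simulated data, the bootstrap_se function, and the number of replicates are all arbitrary, illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2023)

# Illustrative data; the statistic of interest is the sample median
x = rng.exponential(scale=2.0, size=50)

def bootstrap_se(data, statistic, n_boot=2000):
    """Non-parametric bootstrap: resample the observed data with
    replacement, recompute the statistic each time, and use the
    standard deviation of the replicates as the standard error."""
    n = len(data)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        reps[b] = statistic(resample)
    return reps.std(ddof=1), reps

se, reps = bootstrap_se(x, np.median)
print("Sample median:            ", np.median(x))
print("Bootstrap standard error: ", se)
# A simple percentile confidence interval from the same replicates
print("95% percentile interval:  ", np.percentile(reps, [2.5, 97.5]))
```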

Elements of Large-Sample Theory by Lehmann (Amazon Affiliate link)

This book goes through large-sample theory from the ground up, requiring only calculus as a prerequisite. As you would expect, it covers topics such as point estimation (bias, consistency, efficiency), hypothesis testing (size of the test, power, efficiency, and robustness), and confidence intervals. I really like the book because it is extremely readable, with liberal use of important examples to illustrate the general results, and because it shows how asymptotics can provide extremely useful results which can often be applied with finite samples (which is all we ever have in practice!). An example is the robustness of the classic two-sample t-test to violations of its assumption of normality.
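As a rough illustration of that robustness result (my own simulation sketch, not an example from the book), the following estimates the type I error rate of the two-sample t-test when the data are skewed rather than normal; by the central limit theorem the error rate approaches the nominal level as the sample size grows. The exponential data, sample sizes, and simulation settings are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def rejection_rate(n, n_sim=5000, alpha=0.05):
    """Simulate two-sample t-tests under the null hypothesis with
    skewed (exponential) data and return the proportion of rejections."""
    rejections = 0
    for _ in range(n_sim):
        x = rng.exponential(scale=1.0, size=n)
        y = rng.exponential(scale=1.0, size=n)
        p = stats.ttest_ind(x, y).pvalue
        if p < alpha:
            rejections += 1
    return rejections / n_sim

for n in (10, 30, 100):
    print(f"n = {n:3d}: empirical type I error = {rejection_rate(n):.3f}")
```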
