Conditional randomization, standardization, and inverse probability weighting

In a previous post, I began following the developments in Miguel Hernán and James Robins' soon-to-be-published book, Causal Inference. There I gave an overview of the first topics they cover, namely potential outcomes, causal effects, and randomization. In this post I'll continue with some personal notes on the remaining parts of Chapter 2 of their book, which cover conditional randomization, standardization, and inverse probability weighting.
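
As a rough illustration of the two approaches the post covers, here is a minimal sketch on a hypothetical simulated dataset (my own toy example, not one from the book), in which a binary covariate L confounds the effect of a binary treatment A on a binary outcome Y:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2015)
n = 100_000

# Hypothetical simulated data: L confounds the effect of treatment A on outcome Y
L = rng.binomial(1, 0.4, n)                      # binary covariate
A = rng.binomial(1, np.where(L == 1, 0.7, 0.3))  # treatment assignment depends on L
Y = rng.binomial(1, 0.1 + 0.2 * A + 0.3 * L)     # outcome depends on both A and L
df = pd.DataFrame({"L": L, "A": A, "Y": Y})

# Standardization: average the stratum-specific risks over the distribution of L
risk = df.groupby(["L", "A"])["Y"].mean()
pL = df["L"].value_counts(normalize=True)
std_rd = sum(pL.loc[l] * (risk.loc[(l, 1)] - risk.loc[(l, 0)]) for l in (0, 1))

# Inverse probability weighting: weight each individual by 1 / P(A = observed a | L)
pA1 = df["L"].map(df.groupby("L")["A"].mean())   # P(A = 1 | L) for each row
w = 1 / np.where(df["A"] == 1, pA1, 1 - pA1)     # inverse probability weights
ipw_rd = (np.average(df["Y"], weights=w * (df["A"] == 1))
          - np.average(df["Y"], weights=w * (df["A"] == 0)))

print(std_rd, ipw_rd)  # both should be close to the true causal risk difference of 0.2
```

Both estimators target the same marginal causal risk difference, and in this fully stratified toy setting they should agree closely with each other and with the true value of 0.2.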

Bayesian inference: are parameters fixed or random?

Yesterday I had an interesting discussion with a friend about how parameters are thought of in Bayesian inference. Coming from a predominantly frequentist statistical education, I had somewhere along the line picked up the notion that for Bayesians, as for frequentists, the true values of the model parameters are unknown but fixed quantities. On this view, the prior distribution represents our belief, before the data are seen, about where this fixed value lies: it expresses our uncertainty about the location of an unknown but fixed parameter value.
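
A toy conjugate example (my own sketch, not something from the discussion) makes this reading concrete: the true parameter is a single fixed number, and the prior and posterior distributions describe our uncertainty about where it lies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

theta_true = 0.3                            # the fixed, unknown success probability
y = rng.binomial(1, theta_true, size=50)    # observed Bernoulli data

# Beta(2, 2) prior: our uncertainty about theta before seeing the data
a_prior, b_prior = 2, 2

# Conjugate update: the Beta posterior summarises our uncertainty after seeing the data
posterior = stats.beta(a_prior + y.sum(), b_prior + len(y) - y.sum())

print(posterior.mean())            # posterior mean, typically near the fixed value 0.3
print(posterior.interval(0.95))    # 95% credible interval for the one fixed theta
```

Under this reading, the prior and posterior describe our beliefs about theta rather than any variation in theta itself, which never changes.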

Potential outcomes, counterfactuals, causal effects, and randomization

Next week I'll be attending the third UK Causal Inference Meeting in Bristol. Causal inference has seen a tremendous amount of methodological development over the last 20 years, and recently a number of books have been published on the topic. In advance of the conference, I've been reading through a draft of the excellent book by Miguel Hernán (who is giving a pre-conference course) and James Robins on 'Causal Inference' (freely downloadable here). So far I've found the book highly readable and intuitive. As I'm working through it, I thought I'd write some posts giving overviews of some of the material covered; I personally find this useful for cementing the ideas in my own mind, and it may also be of use to others.
