# Jonathan Bartlett

## Bayesian correction for covariate measurement error

Recently my colleague Ruth Keogh and I had a paper published: 'Bayesian correction for covariate measurement error: a frequentist evaluation and comparison with regression calibration' (open access here). The paper compares the popular regression calibration approach for handling covariate measurement error in regression models with a Bayesian approach. The two methods are compared from the frequentist perspective, and one of the arguments we make is that frequentists should more often consider using Bayesian methods.

## Prediction intervals after random-effects meta-analysis

Christopher Partlett and Richard Riley have just published an interesting paper in Statistics in Medicine (open access here). They examine the performance of 95% confidence intervals for the mean effect and 95% prediction intervals for a new effect in random-effects meta-analysis.
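As a reminder of the two intervals being compared, the following is a minimal sketch of a DerSimonian-Laird random-effects analysis: a normal-based 95% confidence interval for the mean effect, and the t-based 95% prediction interval for the effect in a new study. The study estimates and variances are made up for illustration, and the t critical value for k - 2 degrees of freedom is hard-coded.

```python
import math

# Hypothetical study effect estimates (e.g. log odds ratios) and variances
y = [0.30, 0.10, 0.45, -0.05, 0.25, 0.60, 0.15, 0.35, 0.05, 0.40]
v = [0.04, 0.09, 0.05, 0.12, 0.06, 0.08, 0.10, 0.05, 0.11, 0.07]
k = len(y)

# DerSimonian-Laird estimate of the between-study variance tau^2
w = [1 / vi for vi in v]
sw = sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))

# Random-effects pooled mean and its standard error
ws = [1 / (vi + tau2) for vi in v]
mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
se = math.sqrt(1 / sum(ws))

# 95% confidence interval for the mean effect (normal quantile)
ci = (mu - 1.96 * se, mu + 1.96 * se)

# 95% prediction interval for the effect in a new study (Higgins et al.),
# using t with k - 2 = 8 degrees of freedom: t_{0.975, 8} = 2.306
half = 2.306 * math.sqrt(tau2 + se ** 2)
pi = (mu - half, mu + half)
print(f"CI ({ci[0]:.3f}, {ci[1]:.3f}), PI ({pi[0]:.3f}, {pi[1]:.3f})")
```

The prediction interval is always wider than the confidence interval, since it incorporates the between-study heterogeneity tau^2 on top of the uncertainty in the mean.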

## Why you shouldn't use propensity score matching

I've just watched a highly thought-provoking presentation by Gary King of Harvard, available at https://youtu.be/rBv39pK1iEs, on why propensity score matching should not be used to adjust for confounding in observational studies. The presentation makes great use of graphs to explain the concepts and the arguments behind some of the issues with propensity score matching.

## Confidence intervals for the hazard ratio in RCTs that agree with the log-rank test

The log rank test is often used to test the hypothesis of equality for the survival functions of two treatment groups in a randomised controlled trial. Alongside this, trials often estimate the hazard ratio (HR) comparing the hazards of failure in the two groups. Typically the HR is estimated by fitting Cox's proportional hazards model, and a 95% confidence interval is used to indicate the precision of the estimated HR.
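One way to guarantee agreement between the interval and the test is the score-based (Peto one-step) estimator, whose Wald confidence interval is built from the same observed-minus-expected statistic as the log-rank test. Below is a minimal sketch with made-up trial data; it is an illustration of the agreement property, not a replacement for fitting Cox's model.

```python
import math
from statistics import NormalDist

# Hypothetical trial data: (time, event indicator 1=failure 0=censored, group)
data = [(1, 1, 0), (2, 1, 0), (3, 0, 0), (4, 1, 0), (5, 1, 0),
        (2, 1, 1), (3, 0, 1), (5, 1, 1), (6, 1, 1), (7, 0, 1)]

O_minus_E = 0.0  # observed minus expected group-1 events over event times
V = 0.0          # hypergeometric variance of that sum

for t in sorted({s for s, d, _ in data if d == 1}):
    at_risk = [g for s, d, g in data if s >= t]
    n, n1 = len(at_risk), sum(at_risk)
    d_all = sum(1 for s, d, _ in data if s == t and d == 1)
    d1 = sum(1 for s, d, g in data if s == t and d == 1 and g == 1)
    O_minus_E += d1 - d_all * n1 / n
    if n > 1:
        V += d_all * (n1 / n) * (1 - n1 / n) * (n - d_all) / (n - 1)

chi2 = O_minus_E ** 2 / V  # log-rank test statistic
p = 2 * (1 - NormalDist().cdf(abs(O_minus_E) / math.sqrt(V)))

# Peto one-step (score-based) log hazard ratio and its standard error:
# by construction its Wald z equals the log-rank z, so the 95% CI excludes
# HR = 1 exactly when the log-rank test is significant at the 5% level
log_hr = O_minus_E / V
se = 1 / math.sqrt(V)
ci = (math.exp(log_hr - 1.96 * se), math.exp(log_hr + 1.96 * se))
print(f"chi2 {chi2:.3f}, p {p:.3f}, HR {math.exp(log_hr):.2f}")
```

The usual Wald interval from Cox's model uses a different variance estimate, which is why it can (slightly) disagree with the log-rank test near the 5% boundary.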

## Multiple imputation for informative censoring R package

Yesterday the Advanced Analytics Centre at AstraZeneca publicly released the InformativeCensoring package for R on GitHub. Standard survival (time-to-event) analysis methods assume that censoring is uninformative: that is, the hazard of failure at a given time among subjects still at risk (those who have not yet failed or been censored) is the same as the hazard at that time among those who have been censored (with regression modelling, this assumption is somewhat relaxed).
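The consequence of violating this assumption can be illustrated with a small simulation (this does not use the package itself, and all the distributional choices are made up): when high-risk subjects are also censored early, the risk set becomes progressively enriched with low-risk subjects, and the Kaplan-Meier estimator, which assumes uninformative censoring, overestimates survival.

```python
import math
import random

random.seed(1)
n = 20_000

# Each subject is high- or low-risk; high-risk subjects both fail faster
# and are censored earlier, so censoring is informative
times, events = [], []
for _ in range(n):
    high_risk = random.random() < 0.5
    t = random.expovariate(3.0 if high_risk else 1.0)  # failure time
    c = random.expovariate(5.0 if high_risk else 0.1)  # censoring time
    times.append(min(t, c))
    events.append(1 if t <= c else 0)

def km_surv(times, events, t0):
    """Kaplan-Meier estimate of S(t0); valid only if censoring is uninformative."""
    s, at_risk = 1.0, len(times)
    for t, d in sorted(zip(times, events)):
        if t > t0:
            break
        if d:
            s *= 1 - 1 / at_risk
        at_risk -= 1
    return s

km = km_surv(times, events, 1.0)
true_s = 0.5 * math.exp(-1.0) + 0.5 * math.exp(-3.0)  # true marginal S(1)
print(f"Kaplan-Meier S(1) = {km:.3f}, true S(1) = {true_s:.3f}")
```

Here the Kaplan-Meier estimate of S(1) is noticeably above the true mixture survival probability, because informatively censored high-risk subjects leave the risk set before they can fail.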

## Estimands and assumptions in clinical trials

Today I listened to a great Royal Statistical Society webinar, with Alan Phillips and Peter Diggle (current RSS president) presenting. The topic was a particularly hot one in the clinical trials world right now, namely estimands.

Alan's presentation gave an excellent overview of the work of a PSI/EFSPI special interest group on estimands. Topics discussed included defining exactly what is meant by an estimand, whether there should be a standardised set of estimands which could be used across trials conducted in different disciplines, and what the estimand discussion means in terms of implementation and statistical analysis.