From a tweet I just came across the following article at the UK’s NHS Choices website. It raises doubts about the predictive value of a new test for Alzheimer’s disease, published in a paper here. The model aims to predict whether those suffering from mild cognitive impairment will progress to Alzheimer’s disease (AD) in the following year.

The NHS Choices article states

An important point is that, while the test accuracy rate of 87% sounds impressive, it may be a large overestimate of what would happen in reality.

Given real world assumptions on the proportion of people who have mild cognitive impairment that progress to Alzheimer’s disease (10-15%), the predictive ability of the test falls to around 50% – no better than a coin toss.

In my opinion (and evidently that of someone else, who has commented similarly on the NHS Choices page), this is utter nonsense! That the positive predictive value of the test is 50% does not mean that the predictive ability of the test is the same as a coin toss (which has zero predictive ability).

To make clear what I mean, assume, as the NHS Choices article does (they assume between 10% and 15%), that the prevalence of AD in the given population is 15%. The sensitivity of the coin toss test is the probability of the test being positive (say heads) given that a patient has (or will have) AD. Since the coin toss outcome is statistically independent of (completely unrelated to) the (future) disease status of a patient, the sensitivity is just 0.5. Similarly the specificity (the probability of tails, given no AD) is 0.5. The positive predictive value of the coin toss test is then

P(AD|+) = P(+|AD)P(AD) / {P(+|AD)P(AD) + P(+|no AD)P(no AD)}

= 0.5*0.15 / (0.5*0.15 + 0.5*0.85)

= 0.15

Before doing a test, the prior probability of a patient having AD is identical to the overall prevalence of 0.15. The positive predictive value of the coin toss test is also 0.15 – this is a consequence of the coin test having no predictive value.
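The coin toss calculation above can be sketched in a few lines of Python (a minimal illustration of Bayes' theorem; the function name `ppv` is my own, not from the paper):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem:
    P(AD|+) = P(+|AD)P(AD) / {P(+|AD)P(AD) + P(+|no AD)P(no AD)}."""
    true_pos = sensitivity * prevalence              # P(+|AD) * P(AD)
    false_pos = (1 - specificity) * (1 - prevalence) # P(+|no AD) * P(no AD)
    return true_pos / (true_pos + false_pos)

# Coin toss "test": sensitivity = specificity = 0.5, prevalence 15%
print(ppv(0.5, 0.5, 0.15))  # 0.15 -- identical to the prior, i.e. no predictive value
```

Whatever prevalence you plug in, `ppv(0.5, 0.5, prevalence)` simply returns the prevalence back, which is exactly what "no predictive value" means.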

In contrast, the test proposed by the authors of this paper is reported to have sensitivity and specificity of 0.85 and 0.88 respectively. This results in a positive predictive value of

P(AD|+) = P(+|AD)P(AD) / {P(+|AD)P(AD) + P(+|no AD)P(no AD)}

= 0.85*0.15 / (0.85*0.15 + 0.12*0.85)

= 0.56

Thus using the proposed test, the prior probability of AD is increased, upon receiving a positive test result, to 0.56. While I would agree that the 87% accuracy figure quoted might be wrongly interpreted by someone to mean that if they get a positive test result they have a 0.87 chance of getting AD, to say that the proposed test has the same predictive ability as a coin toss is plainly incorrect.
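Plugging the paper's reported figures into the same Bayes calculation confirms this (a minimal sketch, again assuming a 15% prevalence; `ppv` is my own helper name):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value P(AD|+) via Bayes' theorem."""
    true_pos = sensitivity * prevalence              # P(+|AD) * P(AD)
    false_pos = (1 - specificity) * (1 - prevalence) # P(+|no AD) * P(no AD)
    return true_pos / (true_pos + false_pos)

# Proposed test: sensitivity 0.85, specificity 0.88, prevalence 15%
print(round(ppv(0.85, 0.88, 0.15), 2))  # 0.56
```

So a positive result moves the probability of AD from the 0.15 prior up to 0.56 – well short of 0.87, but a long way from the coin toss's 0.15.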

This is really interesting – and quite worrying!

It seems to me that it could motivate a more general post on Bayes’ Theorem (as utilised above) to avoid similarly incorrect reasoning in other situations?

Thanks Mike. Yes that’s a good idea. It is indeed very easy (for example) to wrongly interpret the sensitivity (P(+ test result | disease)) as the positive predictive value (P(disease|+ test result)). I’ll add it to my list of posts to write!