The Centre for Evidence-Based Medicine develops, promotes and disseminates better evidence for healthcare.

July 19, 2018

**Getting the wrong end of the stick: The Prosecutor’s Fallacy**

We may all be guilty of it at some time or other

*The probability that this nurse’s shifts would coincide with so many deaths and resuscitations by chance is 1 in 342 million, so she must be guilty.*

*There is only a 1 in 1000 chance of a positive test result if the blood is not infected, so a positive result means you are almost certainly infected.*

*He mustn’t love me any more; he still hasn’t replied to my message.*

Kathy Taylor, medical statistician

The three statements above have one thing in common – they are all examples of The Prosecutor’s Fallacy (1). This is a logical error involving conditional probabilities – a measure of the chance, likelihood, or probability of X when Y has happened, Y being something that modifies the chance. The Prosecutor’s Fallacy can be avoided by making sure the probability answers the right question: focus on how the evidence applies to the ‘defendant’, not on the evidence alone in the absence of other relevant factors.

The Prosecutor’s Fallacy is most often associated with miscarriages of justice. It’s when the probability of innocence given the evidence is wrongly assumed to equal an infinitesimally small probability that that evidence would occur if the defendant was innocent. Consequently, highly improbable innocent explanations have led to the assumption of guilt, as with the murder convictions of Sally Clark in 1999 and Lucia de Berk in 2003 (2).

Leonard Mlodinow’s doctor made the same logical error when he gave an alarming prognosis back in 1989 (3). The doctor had misinterpreted the 1 in 1000 probability that the HIV test would produce a positive result when the blood is not infected (the false positive rate), taking it to be the probability that Mlodinow’s blood was not infected given a positive test result. Only 1 in 10,000 people from a low-risk population who are tested is eventually confirmed as being infected, but because the false positive rate is 1 in 1000, 10 people in 10,000 will test positive without being infected, compared with 1 who is infected. Thus, the odds that Mlodinow was infected were about 10 to 1 against. The fact that he was from a low-risk population suggested that he was unlikely to be infected.
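The arithmetic above can be checked with a short calculation. This is a minimal sketch using the figures quoted in the text (1 in 10,000 prevalence in a low-risk population, a 1 in 1000 false positive rate), with the simplifying assumption, not stated in the original, that the test always detects a true infection:

```python
# Sketch of the HIV example, using the numbers quoted in the text.
population = 10_000
infected = 1                      # prevalence: 1 in 10,000 in a low-risk population
false_positive_rate = 1 / 1000    # 1 in 1000

false_positives = (population - infected) * false_positive_rate  # about 10 people
true_positives = infected  # simplifying assumption: the test catches the one real case

# Among positive results, roughly 10 are false for every 1 that is true.
odds_against_infection = false_positives / true_positives
print(f"Odds against infection, given a positive test: {odds_against_infection:.0f} to 1")
```

Running this prints odds of 10 to 1 against infection, matching the figure in the text.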

The third example above (“He mustn’t love me any more…”) reflects someone wrongly assuming that, given a delayed response to their message, the probability that they have lost favour equals the high probability of a delayed response that would occur if they had lost favour, whereas there is a higher probability that there is some other reason for the delay. Variants of this example, which lead to misunderstandings between friends, families, or colleagues, make it perhaps the most common example of the Prosecutor’s Fallacy.

In these examples, the conditional probabilities are inverted, but doing this ignores both the alternative explanations and the associated probability of “guilt” (disease or lost favour) before the new evidence (the result of a test or delayed response) occurs. In the courtroom, this assumed (or prior) probability is the probability of guilt or innocence based on all the other evidence, and in the medical context, it’s the prevalence of the disease. Ignoring the base rate is a common error (4). Bayes’ theorem (5) shows how the two conditional probabilities are related in updating the prior probability following the addition of new information:

*P(Guilt* | *Evidence) = P(Evidence* | *Guilt) × P(Guilt) / P(Evidence)*

where *P(Evidence) = P(Evidence* | *Guilt) × P(Guilt) + P(Evidence* | *Innocent) × P(Innocent)*.
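The Bayesian update described above can be written as a small function. This is an illustrative sketch (the function and argument names are mine, not from the article):

```python
def update(prior, p_evidence_if_guilty, p_evidence_if_innocent):
    """Update a prior probability of 'guilt' (or disease) with new evidence,
    via Bayes' theorem: P(G|E) = P(E|G) * P(G) / P(E)."""
    p_evidence = (p_evidence_if_guilty * prior
                  + p_evidence_if_innocent * (1 - prior))
    return p_evidence_if_guilty * prior / p_evidence

# The fallacy confuses P(Evidence | innocent) with P(innocent | Evidence).
# With a tiny prior, even evidence that is very unlikely under innocence
# can leave the posterior probability of guilt low:
posterior = update(prior=0.0001, p_evidence_if_guilty=1.0, p_evidence_if_innocent=0.001)
print(posterior)  # about 0.09: 'guilt' is still unlikely
```

The prior dominates when it is very small, which is exactly the point of the base-rate discussion above.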

Therefore, the Prosecutor’s Fallacy is a subtle error that requires careful thought to understand, which makes teaching it all the more challenging. Visual explanations may be helpful.

The difference between the two conditional probabilities can be illustrated by considering the results of a diagnostic test in one million people, at two levels of prevalence, and using blocks with areas scaled to represent the number of people (shown in black). The test has 98% sensitivity and a false positive rate of 1% (shown in red).

The first conditional probability is *P(Positive test result* | *No disease)*, which is the false positive rate. It is the same for (a) and (b):

*P(Positive test result* | *No disease)* = 9998/999,800 = 8000/800,000 = 1%

The numerator is the number of people who have no disease and have tested positive (9998 and 8000 for (a) and (b) respectively). The denominator is the total number with no disease irrespective of their test result (labelled No Disease). This probability does not vary according to the prevalence.

The second conditional probability is *P(No disease* | *Positive test result)*, i.e. the chance that you don’t have the disease even though the test is positive:

*P(No disease* | *Positive test result)* = 9998/10,194 ≈ 98% for (a), and 8000/204,000 ≈ 4% for (b)

The numerator is the same as before, but the denominator is the number of people in the whole sample with a positive test result (shaded area). This probability depends on the prevalence, and also on the relative numbers of those with positive tests with and without the disease.

Notice that at 0.02% prevalence the two conditional probabilities differ by 97 percentage points, but at 20% prevalence the difference is only about 3 percentage points. Therefore, the Prosecutor’s Fallacy is not an issue when the prevalence (or prior likelihood of guilt) is high, because the two conditional probabilities are then similar.

A simpler version of the Prosecutor’s Fallacy arises when a defendant shares the physical characteristics of the perpetrator of the crime, and the probability of innocence given this match, *P(Innocent* | *Match)*, is wrongly assumed to equal the infinitesimally small chance of a random person in the population sharing those characteristics, *P(Match)*. These probabilities are not equal, as Bayes’ theorem shows. Unlike the previous examples, the conditional probabilities are not inverted here, but as in the previous examples the logical error arises from answering the wrong question: focusing purely on the evidence, and not on the evidence as it relates to the defendant, including other factors that may modify the likelihoods.

References

- Thompson WC, Schumann EL. Interpretation of statistical evidence in criminal trials: The prosecutor’s fallacy and the defense attorney’s fallacy. Law and Human Behavior 1987; 11:167-187
- Hubert L, Wainer H. A statistical guide for the ethically perplexed. CRC Press, Taylor & Francis Group. 2013
- Mlodinow L. The drunkard’s walk: how randomness rules our lives. Allen Lane. 2008
- Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science 1974; 185(4157):1124-1131
- Altman D. Practical statistics for medical research. Chapman & Hall. 1993

Kathy receives funding from the NHS National Institute for Health Research (NIHR) Programme for Applied Research. The views expressed are those of the author and not necessarily those of the NHS or the NIHR.

*Want to learn how we teach statistics and other key topics in Evidence-Based Health Care? Then join us at our annual teaching course, 10 – 13 September 2018. More details here.*