Holding pharmaceutical companies and regulatory bodies to account
November 10, 2015
‘Medical investigations provide a force for the public good, which requires evidence-based methods.’
Carl Heneghan, Professor of EBM: Holding pharmaceutical companies and regulatory bodies to account with evidence. BMJ Medical Investigations Conference, Open Foundation, New York, November 12, 2015. Download: New York – Behind the headlines pdf
Spot the difference?
When you realize that – on the same day – two UK national newspapers have reported on the same drug but come to exactly opposite conclusions, there must be a problem. According to one headline, millions are being denied a life-saving treatment – statins. According to the other, statins apparently stop a life-saving treatment – the flu jab – from working.
To me at least, it is clear that confounding and bias are major problems inherent in many headlines – giving rise to a new term from the day: ‘urban legends’.
In assessing health claims, at the centre we generally start by analysing three main issues: who the evidence applies to, the size of the effect, and the quality of the evidence.
For example, when a sports product claims to ‘allow damaged muscles to repair and recover faster’, you should expect to find high-quality evidence from a randomized trial. Assessing the type of study that underpins a health claim often reveals low-quality evidence.
A 2011 study of health-related press releases from 20 major UK universities, published in the BMJ, reported that 36% of press releases made exaggerated claims about human health from research carried out on animals.
A 2008 James Lind Library article by Michael B. Bracken reports why animal experiments often do not translate into human trials, noting that the observation that animal trials are a ‘poor predictor of human experience’ is not new. So, one third of headlines can be disregarded simply by asking whether the research involved humans.
‘Exaggeration in news is strongly associated with exaggeration in press releases.’
One tip: the study type can generally be determined by asking three main questions (as per the tree in the Figure; see the linked CEBM page on study designs).
Q1. What was the aim of the study?
- To simply describe a population (PO questions)? Then it is a descriptive study.
- To quantify the relationship between factors (PICO questions)? Then it is an analytic study.
Q2. If analytic, was the intervention randomly allocated?
- Yes? RCT
- No? Observational study
For observational studies, the main types depend on the timing of the outcome measurement, so our third question is:
Q3. When were the outcomes determined?
- Some time after the exposure or intervention? cohort study (‘prospective study’)
- At the same time as the exposure or intervention? cross sectional study or survey
- Before the exposure was determined? case-control study (‘retrospective study’ based on recall of the exposure)
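The three questions above form a small decision tree, and can be sketched as a simple classifier. This is a minimal illustration in Python; the function name, parameter names, and labels are my own, not taken from the CEBM page:

```python
def classify_study(aim, randomized=None, outcome_timing=None):
    """Classify a study design using the three-question triage.

    aim: "describe" (PO question) or "quantify" (PICO question)
    randomized: for analytic studies, was the intervention randomly allocated?
    outcome_timing: for observational studies, when were outcomes determined
        relative to the exposure: "after", "same_time", or "before"
    """
    # Q1. What was the aim of the study?
    if aim == "describe":
        return "descriptive study"
    # Q2. If analytic, was the intervention randomly allocated?
    if randomized:
        return "randomized controlled trial (RCT)"
    # Q3. When were the outcomes determined?
    timings = {
        "after": "cohort study (prospective)",
        "same_time": "cross-sectional study or survey",
        "before": "case-control study (retrospective)",
    }
    return timings[outcome_timing]
```

For instance, a study that quantifies a relationship without random allocation, ascertaining exposure by recall after the outcome, classifies as a case-control study.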
You can read my Storify of the conference day, which highlights some of the important tweets and messages.