Selective reporting of outcomes is just one type of reporting bias, and it can arise in a number of ways. In the previous blog we gave an example of selective reporting bias arising through under-reporting of data. The other ways in which selective reporting of outcomes may arise are summarised in the Cochrane Handbook and include:

  • Selective omission of outcomes from the published study; e.g. results that are not significant are deliberately left out
  • Selective choice of data for an outcome; e.g. outcomes are measured at several time points and only some, usually those that show a favourable result, are reported
  • Selective reporting of different analyses using the same data; e.g. a study measuring change in weight in kilograms (a continuous variable) finds the results have more impact when reported as BMI <25 or BMI ≥25 (a dichotomous variable); see the sketch after this list
  • Selective reporting of subsets of the data; e.g. a study that plans to report the total number of strokes but ends up reporting only ischaemic strokes and not haemorrhagic ones
  • Selective under-reporting of data; e.g. a study that plans to report a specific outcome, for example a change in blood pressure, finds no difference between interventions, and the authors avoid reporting the actual data and instead simply state that the difference was “not significant”
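To make the third mechanism concrete, here is a minimal Python sketch using made-up data: the same continuous measurements can be analysed as a continuous outcome or dichotomised at an arbitrary cut-off, and reporting only whichever analysis looks better is selective reporting of different analyses. All numbers and variable names are hypothetical, not taken from any real trial.

```python
# Illustrative only: simulated data showing how one dataset can yield two
# different-looking analyses (continuous vs dichotomised at BMI 25).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical end-of-trial BMI values for two arms (not real trial data)
control = rng.normal(loc=26.0, scale=3.0, size=80)
treated = rng.normal(loc=25.2, scale=3.0, size=80)

# Analysis 1: compare BMI as a continuous outcome
t_stat, p_continuous = stats.ttest_ind(treated, control)

# Analysis 2: dichotomise the same data at BMI < 25 and compare proportions
table = np.array([
    [(treated < 25).sum(), (treated >= 25).sum()],
    [(control < 25).sum(), (control >= 25).sum()],
])
chi2, p_dichotomised, _, _ = stats.chi2_contingency(table)

print(f"continuous analysis p = {p_continuous:.3f}")
print(f"dichotomised analysis p = {p_dichotomised:.3f}")
# Reporting only the more favourable of these two p-values, without saying
# the other analysis was run, is selective reporting on the same data.
```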

Do you consider any of them when reading a paper?

Prevalence and impact of selective reporting

A 2009 study showed that in a cohort of registered trials, 31% (46 of 147) had some form of discrepancy between the outcomes registered and the outcomes published. Furthermore, when it could be assessed, the studies with changes were more likely to report a statistically significant result. In a cohort of Cochrane systematic reviews, over a third were suspected to include at least one RCT with selective outcome reporting. The authors also demonstrated the impact of this bias, finding that selective outcome reporting can produce a median change in treatment effect size of 39% (IQR 18% to 67%).

The authors of a 2015 systematic review of studies examining selective outcome reporting identified 27 relevant studies. The median proportion of trials with an identified discrepancy between the registered and published primary outcome was 31% (although there was large variability between studies). Four studies observed outcome changes in more than 50% of trials.

Bottom line, selective reporting of outcomes is still very much around.

Reducing selective reporting of outcomes

The COMPare project was recently launched. Its aim is to prospectively audit outcome switching in all RCTs published in the top five medical journals (NEJM, JAMA, The Lancet, Annals of Internal Medicine, BMJ). The outcomes presented in each published trial are compared with the outcomes stated in the clinical trial registry and protocol. When any outcome switching is noted, a letter is sent to the journal editors to inform them. As stated in the project approach: “Through increased awareness of misreported outcomes, individual accountability, and feedback for specific journals, we hope to fix this ongoing problem”. The results are being posted and updated live; as I write, 61 trials have been checked and only 9 were deemed perfect. Look and see whether the results have changed.

But the long-term “fix” to reduce selective reporting of outcomes shouldn’t actually be that difficult. Reviewers and editors should make it routine practice (and many do) to check that the published outcomes match those in the trial protocol or registry (e.g. the ISRCTN registry or ClinicalTrials.gov). If the protocol is not available, it should be requested directly from the corresponding author. If all of that is not possible (which would be a concern in itself), then, as a reader, checking the outcomes reported in the methods section against the outcomes reported in the results section of the published article should be a minimum.
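As a minimal sketch of that cross-check, assuming you have already copied the outcome lists out of the registry entry and the published paper by hand, the snippet below flags outcomes that were registered but never reported and outcomes that appear only in the paper. The outcome names here are invented purely for illustration.

```python
# Hypothetical example of cross-checking registered vs published outcomes.
# The outcome lists below are invented for illustration only.
registered_outcomes = {
    "change in systolic blood pressure at 12 weeks",
    "all strokes (ischaemic and haemorrhagic)",
    "all-cause mortality",
}
published_outcomes = {
    "change in systolic blood pressure at 12 weeks",
    "ischaemic strokes",       # subset of the registered outcome
    "quality of life score",   # appears only in the paper
}

missing_from_paper = registered_outcomes - published_outcomes
added_in_paper = published_outcomes - registered_outcomes

print("Registered but not reported:", sorted(missing_from_paper))
print("Reported but not registered:", sorted(added_in_paper))
# Any entry in either list is a prompt to ask the authors (or check the
# protocol) why the outcome was dropped, changed, or added.
```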

The CONSORT statement is a minimum set of recommendations for the reporting of randomised trials, designed to increase transparency. The checklist specifically includes the following items:

  • Item 6a (Outcomes): completely defined, pre-specified primary and secondary outcome measures, including how and when they were assessed
  • Item 6b: any changes to trial outcomes after the trial commenced, with reasons

So cross-checking the CONSORT checklists completed by authors is also good practice to ensure accurate reporting.

However, although many journals endorse the CONSORT statement, not all enforce it.

Trialists should ensure that all data are as transparent as humanly possible. Open access journals and open data repositories aim to ensure transparency and access to all data. There will be little place to hide should you choose otherwise.

But the bottom line, if you want to be a good experimenter, is that if you say you’re going to do something: (a) do it, (b) show that you did it, and (c) show how you did it.