All health researchers should begin their training by preparing at least one systematic review

Kamal Mahtani

Here is the pre-publication script of an article published in the Journal of the Royal Society of Medicine, in which Kamal Mahtani argues that carrying out a systematic review should be an essential part of health researcher training. Kamal will be chairing a session entitled “Improving the Evidence for Systematic Reviews” at Evidence Live 2016 (taking place from 22nd to 24th June 2016).

One of the founding principles of evidence based medicine is to use the best available evidence to inform decisions made by patients and clinicians.[1] Systematic reviews have made significant contributions to the pool of best available evidence by systematically gathering, appraising and summarising evidence to answer a given health-care question.[2] Indeed, the current UK Chief Medical Officer, Dame Sally Davies, has emphasised that “by removing uncertainties in science and research, systematic reviews ensure that only the most effective and best value interventions are adopted by the NHS and social care providers”.[3]

The value of systematic reviews in health care

One of the largest collections of systematic reviews can be found in the Cochrane Library, where reviews are periodically updated to reflect new research.[4] The well-known Cochrane logo depicts a real example of the value of systematically pooling data for meta-analysis, in this case demonstrating the clear benefit of corticosteroids in accelerating fetal lung maturation in preterm babies.[5,6] There are now many examples of the significant impact that systematic reviews have had on clinical practice. These include anticoagulation to prevent ischaemic stroke in patients with atrial fibrillation,[7] the diagnostic value of clinical features in identifying serious infections in children,[8] and the prognostic value of smoking cessation even after a diagnosis of early stage lung cancer.[9]

As well as identifying benefits, systematic reviews can protect patients from harm. Despite growing evidence of the risks from the 1960s onwards, a significant proportion of parents were still placing their babies to sleep on their front well into the 1990s.[10,11] In 2005 a systematic review of observational studies showed a more than four-fold increase in deaths associated with the prone position compared with sleeping supine.[12] This conclusion could (and should) have been reached, with statistical certainty, 35 years earlier, had a systematic search and pooling of the available evidence been carried out; it has been estimated that tens of thousands of cot deaths could have been prevented had this been done.[10]

As another example, individual studies of rosiglitazone, used to treat type 2 diabetes, failed to pick up an increased risk of myocardial infarction associated with the drug. The risk became apparent from a systematic review, which ultimately led to withdrawal of the drug from the European market, more than 10 years after it had first become available.[14,15,16]

Systematic reviews reduce research waste

Clinical needs and finite budgets dictate priorities in clinical research, and systematic reviews can reduce research waste. It has been estimated that as much as 85% of research investment is avoidably “wasted”.[17] This estimate was based on the knowledge that about 50% of clinical trials are never published.[18,19] Of the remaining 50%, at least half (25% of the total) are not reported sufficiently clearly, completely, and accurately for others to interpret, use, or replicate. And of the final 25%, only about half will have been designed and executed well enough for their results to be used with confidence in clinical decision making.[20] A recent series of articles has highlighted the steps researchers can take to reduce this waste and increase the value of their work.[21] One of the recommendations was that new research should not be undertaken without a systematic assessment of what is already known or being researched. If the research question can be answered adequately using existing evidence, a much less expensive evidence synthesis should replace a comparatively expensive new clinical trial that is simply not needed.[22]
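The arithmetic behind the 85% estimate can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation based on the stage-by-stage figures quoted above, not the cited authors' own model:

```python
# Back-of-the-envelope reconstruction of the research-waste estimate:
# each stage halves (or quarters) the fraction of research that remains usable.
published = 0.50        # ~50% of clinical trials are ever published
usably_reported = 0.50  # ~half of those are reported clearly enough to use
well_designed = 0.50    # ~half of those were designed and executed well enough

usable = published * usably_reported * well_designed  # fraction of all research
wasted = 1 - usable

print(f"usable: {usable:.1%}, wasted: {wasted:.1%}")
# prints: usable: 12.5%, wasted: 87.5%
```

The product lands at roughly 12% usable, i.e. close to 85–88% wasted, which is the ballpark the article quotes.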

Unnecessary research not only wastes resources but, more importantly, can harm patients. This is powerfully illustrated by the technique of cumulative meta-analysis. Whereas a traditional forest plot may order studies alphabetically or chronologically, a cumulative meta-analysis generates a new summary effect size and confidence interval each time a new study is added to the pool.[23,24] Lau and colleagues applied this technique to clinical trials of streptokinase for acute myocardial infarction, antibiotic prophylaxis to reduce peri-operative mortality in colorectal surgery, and endoscopic treatment of upper gastrointestinal bleeding; in each case they showed that evidence of efficacy would have been apparent, through systematic assessment, years before it was suspected.[23] Their examples illustrate how patients enrolled in clinical trials after efficacy could already have been demonstrated were denied potentially lifesaving interventions. To prevent this, the authors recommended that a new meta-analysis be conducted each time data from a new trial become available; this would be the “best way to utilize the information that can be obtained from clinical trials to build evidence for exemplary medical care.” The point was further emphasised by Antman et al., who demonstrated that recommendations made by experts, for example in medical textbooks, frequently lagged behind the meta-analytical evidence from pooled randomised controlled trials.[25]
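The mechanics of a cumulative meta-analysis are simple enough to sketch in a few lines. The following is an illustrative fixed-effect (inverse-variance) version on made-up effect sizes, not the actual trials analysed by Lau and colleagues; it shows how the pooled confidence interval can exclude no-effect well before the last study is run:

```python
import math

# Illustrative fixed-effect cumulative meta-analysis on the log odds ratio
# scale, with inverse-variance weights. The study data below are invented
# for demonstration only.
studies = [  # (log odds ratio, standard error)
    (-0.40, 0.30),
    (-0.25, 0.25),
    (-0.35, 0.20),
    (-0.30, 0.15),
]

sum_w = 0.0   # running sum of weights (1 / SE^2)
sum_wy = 0.0  # running sum of weight * effect

for i, (log_or, se) in enumerate(studies, start=1):
    w = 1.0 / se ** 2
    sum_w += w
    sum_wy += w * log_or
    pooled = sum_wy / sum_w             # pooled log odds ratio so far
    pooled_se = math.sqrt(1.0 / sum_w)  # its standard error
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    sig = "CI excludes no effect" if hi < 0 or lo > 0 else "inconclusive"
    print(f"after study {i}: OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}) -> {sig}")
```

With these invented numbers the pooled interval already excludes no effect after the third study; a trialist looking only at single studies would not see this.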

Nearly ten years after that recommendation, failures to heed the advice were still leading to patient harm. Rofecoxib (Vioxx), originally marketed as a safer alternative to existing non-steroidal anti-inflammatory drugs, was withdrawn from the market in 2004 after concerns emerged of an increased risk of cardiovascular events, notably myocardial infarction.[26] A systematic review of published clinical studies of rofecoxib, conducted before the September 2004 withdrawal, identified 18 randomised controlled trials, all sponsored by the manufacturer.[27] Cumulative meta-analysis of these trials showed that, had a systematic review and meta-analysis of the accumulating evidence been conducted by the end of 2000, it would already have been clear that rofecoxib was associated with a higher incidence of myocardial infarction (Figure 1). Several thousand participants were randomised into trials after 2000, when a clear harm could (and should) already have been detected.

Clinical trials should begin and end with a systematic review

Identifying or carrying out a systematic review before embarking on any new primary research is increasingly seen by research funders as an essential early step. One of the largest funders of research in the UK, the NHS National Institute for Health Research (NIHR), makes the production and promotion of systematic reviews a key investment in its infrastructure.[3,28] Prospective applicants for NIHR funding are now advised to ensure that all proposals for primary research are supported by the findings of systematic reviews of the relevant existing literature.[29] This may mean identifying relevant existing systematic reviews, or carrying out an appropriate review and summarising the findings for the application.[30] Researchers who identify a clear need for new studies should use the information gained from their systematic review to inform the design, analysis, and conduct of their study. This is an essential part of the “Adding Value in Research” framework (Figure 2), which builds on previous work to reduce research waste.[17]

An analysis of trials funded during 2013 by the NIHR Health Technology Assessment (HTA) programme showed that all of them had been informed by one or more systematic reviews.[32] The reasons varied, but by far the most common was to justify the treatment comparisons. Others included obtaining information about adverse events, defining outcomes, and informing other aspects of study design, such as recruitment and consent (Table 1).

While authors increasingly refer to a systematic review in the rationale for a new clinical trial, the same cannot be said for the integration of new trial data into updated systematic reviews. Clarke and colleagues identified randomised trials in five general medical journals in May 1997 (n=26), May 2001 (n=33), May 2005 (n=18) and May 2009 (n=29), and found no notable improvement over time in the extent to which authors interpreted their new data in the context of up-to-date systematic reviews.[33,34]

Opportunities for improvement

There are scientific, ethical and economic reasons for considering a systematic review before and after embarking on further primary research. While there has been some progress towards universal adoption of these principles, there is still substantial room for improvement. To accelerate progress, the following recommendations should be considered.

First, all health researchers should be encouraged and supported to complete at least one relevant systematic review at the start of their training. There should, however, be conditions attached. The review should seek to answer a relevant and needed question, and it should be entered into an international prospective register of systematic reviews, such as PROSPERO.[32] Researchers conducting their first systematic review should be supervised by more experienced reviewers and information specialists, so that the review is not done in isolation but instead acts as a high quality training opportunity. Furthermore, the type of review should ideally reflect the researcher's future planned work, whether quantitative or qualitative. These conditions should ensure that the recommendation creates value rather than waste. Obvious candidates are those embarking on doctoral training schemes, and it should be the responsibility of the funders of those schemes and their host institutions to support this activity.

There are several advantages to providing this training early. A systematic review offers inexperienced researchers the opportunity to gain transferable research skills that will provide value throughout their career: how to formulate a relevant research question; how to search for evidence; familiarity with a variety of study designs; critical appraisal skills that tease out the internal and external validity of a study and assess its quality and risk of bias; data synthesis; and the ability to discuss the implications of findings for both future research and clinical practice. Researchers who then go on to a prospective primary study can use the skills, and the results, gained from their review to design and inform it. Subsequent skills, such as how to recruit participants into a trial or how to complete an ethics application, are likely to be better informed by first having a systematic understanding of the existing literature. Supporting this training will also build capacity and capability in a field short of systematic reviewers, a vision shared by funders such as the NIHR.[3,35]

Secondly, funders of clinical trials should make it a prerequisite of awards that investigators not only use a systematic review to inform their trial but complete their trial with a demonstration of how the new data add to the existing evidence. This would not necessarily have to be in the form of a full systematic review (although this would be advantageous), but could take the form of a brief review, something that may become easier as automated technology responds to this need.[34]

Finally, journals and, more specifically, peer reviewers (i.e. the research community) must give greater attention to ensuring that authors of newly published clinical trials follow recommendations to provide readers with sufficient information to assess how the new evidence fits with existing evidence.[37] Although the CONSORT checklist recommends that authors discuss whether their “interpretation [is] consistent with [their] results, balancing [the] benefits and harms, and considering other relevant evidence”,[38] the need to place new data in the context of a systematic review may need to be made more explicit. Despite the many improvements in the reporting of randomised controlled trials that the CONSORT statement has brought, completeness of reporting remains sub-optimal.[39]

Conclusions

Judging the point at which justified replication becomes wasteful duplication will often be challenging. And, like many aspects of scientific research, there is no guarantee that using a systematic review to inform or contextualise a new trial will lead to better trials.

Nevertheless, the history of clinical research contains numerous examples of failures to consider, conduct, and use systematic reviews, failures that have exposed patients to potential harms and wasted resources on unnecessary clinical trials.[40] Equipped with the correct training, researchers should ensure that a systematic review informs any application for funding of a new primary study, and that on completion the new findings are placed in the context of an updated review. It would be “ethically, scientifically, and economically indefensible” not to.[22]

Cite as: Mahtani KR. All health researchers should begin their training by preparing at least one systematic review. J R Soc Med Published Online First: 26 April 2016. 

doi:10.1177/0141076816643954

Acknowledgements

KRM is supported through an NIHR Clinical Lecturer fellowship. I am grateful for helpful comments from Iain Chalmers, Jeffrey Aronson and Meena Mahtani.

Disclaimer

The views expressed are those of the author and not necessarily those of the NHS, the NIHR or the Department of Health.

References

  1. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996 Jan 13;312(7023):71–2.
  2. Egger M, Smith GD, Altman D. Systematic reviews in health care: meta-analysis in context. John Wiley & Sons; 2008.
  3. National Institute for Health Research (NIHR). Systematic Reviews knowledge to support evidence-informed health and social care [Internet]. Available from: http://www.nihr.ac.uk/documents/about-NIHR/NIHR-Publications/NIHR-Systematic-Reviews-Infrastructure.pdf
  4. Cochrane | Trusted evidence. Informed decisions. Better health. [Internet]. [cited 2016 Jan 28]. Available from: http://www.cochrane.org/
  5. Our logo | Cochrane [Internet]. [cited 2016 Jan 28]. Available from: http://www.cochrane.org/about-us/our-logo
  6. Roberts D, Dalziel SR. Antenatal corticosteroids for accelerating fetal lung maturation for women at risk of preterm birth. In: The Cochrane Library [Internet]. John Wiley & Sons, Ltd; 2006 [cited 2016 Jan 28]. Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD004454.pub2/abstract
  7. Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. In: Cochrane Database of Systematic Reviews [Internet]. John Wiley & Sons, Ltd; 2005 [cited 2016 Jan 28]. Available from: http://onlinelibrary.wiley.com/doi/10.1002/14651858.CD001927.pub2/abstract
  8. Bruel AV den, Haj-Hassan T, Thompson M, Buntinx F, Mant D. Diagnostic value of clinical features at presentation to identify serious infection in children in developed countries: a systematic review. The Lancet. 2010 Mar 6;375(9717):834–45.
  9. Parsons A, Daley A, Begh R, Aveyard P. Influence of smoking cessation after diagnosis of early stage lung cancer on prognosis: systematic review of observational studies with meta-analysis. BMJ. 2010 Jan 22;340:b5569.
  10. Evans I, Thornton H, Chalmers I, Glasziou P. Testing Treatments. 2011 [cited 2016 Jan 30]; Available from: http://www.ncbi.nlm.nih.gov/books/NBK66204/
  11. Ottolini MC, Davis BE, Patel K, Sachs HC, Gershon NB, Moon RY. Prone Infant Sleeping Despite the Back to Sleep Campaign. Arch Pediatr Adolesc Med. 1999 May 1;153(5):512–7.
  12. Gilbert R, Salanti G, Harden M, See S. Infant sleeping position and the sudden infant death syndrome: systematic review of observational studies and historical review of recommendations from 1940 to 2002. Int J Epidemiol. 2005 Jan 8;34(4):874–87.
  13. NHS Choices. Sudden infant death syndrome (SIDS) [Internet]. 2015 [cited 2016 Jan 28]. Available from: http://www.nhs.uk/Conditions/Sudden-infant-death-syndrome/Pages/Introduction.aspx
  14. Effect of rosiglitazone on the frequency of diabetes in patients with impaired glucose tolerance or impaired fasting glucose: a randomised controlled trial. The Lancet. 2006 Sep 23;368(9541):1096–105.
  15. Cohen D. Rosiglitazone: what went wrong? BMJ. 2010 Sep 6;341:c4848.
  16. Nissen SE, Wolski K. Rosiglitazone Revisited: An Updated Meta-analysis of Risk for Myocardial Infarction and Cardiovascular Mortality. Arch Intern Med. 2010 Jul 26;170(14):1191–201.
  17. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. The Lancet. 2009 Jul 4;374(9683):86–9.
  18. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012 Jan 3;344:d7292.
  19. AllTrials. Half of all clinical trials have never reported results [Internet]. 2015 Aug 20 [cited 2016 Jan 29]. Available from: http://www.alltrials.net/news/half-of-all-trials-unreported/
  20. Glasziou P, Chalmers I. Is 85% of health research really ‘wasted’? [Internet]. BMJ Blogs: The BMJ; 2016 [cited 2016 Jan 29]. Available from: http://blogs.bmj.com/bmj/2016/01/14/paul-glasziou-and-iain-chalmers-is-85-of-health-research-really-wasted/
  21. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste. The Lancet. 2014 Jan 11;383(9912):101–4.
  22. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. The Lancet. 2014 Jan 11;383(9912):156–65.
  23. Lau J, Schmid CH, Chalmers TC. Cumulative meta-analysis of clinical trials builds evidence for exemplary medical care. J Clin Epidemiol. 1995 Jan;48(1):45–57.
  24. Cochrane Handbook for Systematic Reviews of Interventions [Internet]. [cited 2016 Jan 29]. Available from: http://handbook.cochrane.org/chapter_11/11_3_2_1_forest_plots_in_revman.htm
  25. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A Comparison of Results of Meta-analyses of Randomized Control Trials and Recommendations of Clinical Experts: Treatments for Myocardial Infarction. JAMA. 1992 Jul 8;268(2):240–8.
  26. Krumholz HM, Ross JS, Presler AH, Egilman DS. What have we learnt from Vioxx? BMJ. 2007 Jan 18;334(7585):120–3.
  27. Jüni P, Nartey L, Reichenbach S, Sterchi R, Dieppe PA, Egger M. Risk of cardiovascular events and rofecoxib: cumulative meta-analysis. The Lancet. 2004 Dec 10;364(9450):2021–9.
  28. National Institute for Health Research (NIHR). About NIHR [Internet]. [cited 2016 Jan 29]. Available from: http://www.nihr.ac.uk/about/
  29. National Institute for Health Research (NIHR). Guidance notes for applicants that ensure all primary research is informed by a review of the existing literature [Internet]. Available from: http://www.nets.nihr.ac.uk/__data/assets/pdf_file/0006/77217/Guidance-notes_literature-review.pdf
  30. National Institute for Health Research (NIHR) Research Design Service London. Conducting a brief systematic style review in support of a primary research application [Internet]. Available from: http://www.rds-london.nihr.ac.uk/RDSLondon/media/RDSContent/files/PDFs/Systematic-Reviews-in-Support-of-Primary-Research-Applications.pdf
  31. Cooper NJ, Jones DR, Sutton AJ. The use of systematic reviews when designing studies. Clin Trials. 2005 Jan 6;2(3):260–4.
  32. Bhurke S, Cook A, Tallant A, Young A, Williams E, Raftery J. Using systematic reviews to inform NIHR HTA trial planning and design: a retrospective cohort. BMC Med Res Methodol. 2015 Dec 29;15(1):1.
  33. Clarke M, Hopewell S, Chalmers I. Reports of clinical trials should begin and end with up-to-date systematic reviews of other relevant evidence: a status report. J R Soc Med. 2007 Jan 4;100(4):187–90.
  34. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. The Lancet. 2010 Jul 3;376(9734):20–1.
  35. Systematic Review Fellowships [Internet]. [cited 2016 Feb 1]. Available from: http://www.nihr.ac.uk/funding/systematic-review-fellowships.htm
  36. Tsafnat G, Dunn A, Glasziou P, Coiera E. The automation of systematic reviews. BMJ. 2013 Jan 10;346:f139.
  37. The EQUATOR Network | Enhancing the QUAlity and Transparency Of Health Research [Internet]. [cited 2016 Feb 4]. Available from: http://www.equator-network.org/
  38. Consort – Welcome to the CONSORT Website [Internet]. [cited 2016 Jan 29]. Available from: http://www.consort-statement.org/
  39. Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT Statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012 Nov 29;1(1):1.
  40. Clarke M, Brice A, Chalmers I. Accumulating Research: A Systematic Account of How Cumulative Meta-Analyses Would Have Provided Knowledge, Improved Health, Reduced Harm and Saved Resources. PLOS ONE. 2014 Jul 28;9(7):e102670.

About Kamal Mahtani

Kamal R. Mahtani is a GP and Clinical Lecturer at the University of Oxford.
