The impact of Cochrane Systematic Reviews: a mixed-method evaluation of outputs from Cochrane Review Groups supported by the UK National Institute for Health Research
Background: There has been a growing emphasis on evidence-informed decision making in health care. Systematic reviews, such as those produced by the Cochrane Collaboration, have been a key component of this movement. The UK National Institute for Health Research (NIHR) Systematic Review Programme currently supports 20 Cochrane Review Groups (CRGs). The aim of this study was to identify the impacts of Cochrane reviews published by NIHR-funded CRGs during the years 2007-2011.

Methods: We sent questionnaires to CRGs and review authors, interviewed guideline developers, and used bibliometrics and documentary review to obtain an overview of CRG impact and to evaluate the impact of a sample of 60 Cochrane reviews. We used a framework with four categories: knowledge production, research targeting, informing policy development, and impact on practice/services.

Results: A total of 1502 new and updated reviews were produced by the 20 NIHR-funded CRGs between 2007 and 2011. The clearest impacts were on policy, with a total of 483 systematic reviews cited in 247 sets of guidance; 62 sets were international, 175 national (87 from the UK) and 10 local. Review authors and CRGs provided some examples of impact on practice or services, for example safer use of medication, the identification of new effective drugs or treatments, and potential economic benefits through reduced use of unproven or unnecessary procedures. However, such impacts are difficult to document objectively, and the majority of review authors were unsure whether their review had produced specific impacts. Qualitative data suggested that Cochrane reviews often play an instrumental role in informing guidance, although a poor fit with guideline scope or methods, reviews being out of date, and a lack of communication between CRGs and guideline developers were barriers to their use.

Conclusions: Health and economic impacts of research are generally difficult to measure, and we found this to be the case in this evaluation. Impacts on knowledge production and clinical guidance were easier to identify and substantiate than those on clinical practice. Questions remain about how impact should be defined and measured, and more work is needed to develop suitable methods for impact analysis.