
Cochrane Database of Systematic Reviews Protocol - Intervention

Knowledge translation interventions for facilitating evidence‐informed decision‐making amongst health policymakers



Abstract

Objectives

This is a protocol for a Cochrane Review (intervention). The objectives are as follows:

To determine the effectiveness of knowledge translation strategies aimed at facilitating evidence‐informed public health decision making by managers and policy‐makers.

Background

Evidence‐informed decision making is a term increasingly used to acknowledge that decisions are informed by a spectrum of evidence rather than relying on singular sources (Oxman 2009). Like evidence‐informed decision making more broadly, evidence‐informed public health involves a complex, multidisciplinary process (Ciliska 2008) and acknowledges the need to consider the best available evidence in the context of the public health issues of concern (Bowen 2005). Evidence‐informed public health decision making operates in a complex environment and is influenced to varying degrees by research, community views, political climate, stakeholder pressure and institutional (including budgetary) constraints (Bowen 2009; Lavis 2002; Lavis 2006a; NCCMT n.d.; Oxman 2009).

The term knowledge translation (KT) is increasingly used in public health research, policy and practice settings to describe the processes needed to facilitate evidence‐informed decision making (Armstrong 2006). Within this process, or series of processes, knowledge goes through an iterative pathway of exchange, synthesis and application with the involvement of researchers and users of knowledge.

While governments, non‐government organisations and others involved in public health decision making fund the development of KT programs or strategies designed to facilitate evidence‐informed decision making, little is known about the effectiveness of these interventions. KT interventions are becoming increasingly popular in public health. This popularity has been fuelled by an ever‐increasing research evidence base, demands on managers and policymakers to use this research evidence, and the challenges associated with using this evidence to inform decisions. As a result, there has been a proliferation of KT strategies designed (by both researchers and decision‐makers) to facilitate evidence‐informed public health decision making.

Description of the intervention

KT has been defined as "the exchange, synthesis and ethically sound application of knowledge ‐ within a complex system of interactions among researchers and users ‐ to accelerate the capture of the benefits of research for [citizens] through improved health, more effective services and products, and a strengthened health care system" (CIHR 2006). Often used in conjunction with, or in place of, KT, knowledge exchange (KE) is described as "collaborative problem‐solving between researchers and decision makers that happens through [a process of] linkage and exchange" (CHSRF 2006). The Canadian Health Services Research Foundation hypothesises that "effective knowledge exchange involves interaction between decision‐makers and researchers and results in mutual learning through the process of planning, producing, disseminating, and applying existing or new research in decision‐making" (CHSRF 2006). Many terms are used to describe KT‐related strategies, including evidence‐based or evidence‐informed decision making, research utilisation, innovation diffusion, knowledge transfer, research dissemination, research implementation, research uptake, knowledge exchange, and knowledge mobilisation. Whilst there may be differences in the ways these terms are used (Estabrooks 2006), for the purposes of this review we will generally refer to these activities as KT.

The categorisation of KT interventions is complex given the multitude of approaches taken to support its implementation. This review focusses specifically on KT interventions designed to increase the contribution of research evidence to evidence‐informed public health decision making. While research can be defined broadly, for the purposes of this review it is defined as “information derived from evaluation research that has assessed the effects and outcomes of potential interventions and programs” (Rychetnik 2004).

These interventions can be simple or can comprise multiple components. Nutley, Walter and Davies (Nutley 2007) have proposed five mechanisms, which form the basis of our interventions of interest.

Dissemination: circulating or presenting research findings to potential users, in formats that may be more or less tailored to their audience

Interaction: developing stronger links and collaborations between the research and policy or practice communities

Social influence: relying on influential others, such as experts and peers, to inform individuals about research and to persuade them of its value

Facilitation: enabling the use of research through technical, financial, organizational and emotional support

Incentives and reinforcement: using rewards and other forms of control to reinforce appropriate behaviour

Incorporating combinations of these mechanisms, KT efforts have been categorised as having push, pull and exchange foci (Lavis 2006b).

Push efforts generally focus on dissemination. They may include the development and distribution of publications, reports, systematic reviews, evidence summaries and online materials. Interventions may support the uptake or reach of these products (Lavis 2006b; Nutley 2007).

Pull efforts may involve a number of mechanisms, including social influence, facilitation, and incentives and reinforcement. These may involve training staff in the application of research to decision making, employment of knowledge brokers (also referred to as facilitators and information specialists) within decision‐making contexts, rapid‐response units, and development of project templates that instruct staff to provide a rationale for their activities (Lavis 2006b; Nutley 2007).

Exchange efforts focus on improving the interactions between the researchers and decision‐makers. This may include the establishment of networks or formal partnerships to support evidence‐informed decision making, prioritisation efforts (where decision‐makers identify their priorities, turn the questions into researchable questions and promote research into these questions), and the use of knowledge brokers where their role is to facilitate partnership development or knowledge translation and exchange (rather than to simply assist with making sense of research evidence for decision‐makers as identified above) (Lavis 2006b; Lomas 2000).

These interventions can be (a) driven by researchers, (b) driven by decision‐makers or (c) designed to develop partnerships between researchers and organisations to support the use of research evidence in public health decision making.

Researchers, in this context, can be described as individuals or groups, or both, who are involved in the conduct of primary (for example intervention) or secondary (for example synthesis or secondary data analysis) research. These researchers are often employed within university environments but may also work in government departments and non‐government organisations. Decision‐makers are defined as those who are employed within governments or local authorities. In this review we are particularly interested in public health decision‐makers, that is, practitioners with executive or managerial responsibilities, or policy‐makers who operate at local, regional, state, national and international levels (Brownson 2009).

How the intervention might work

The role that research evidence plays in the decision‐making process is often difficult to determine (Jewell 2008). Research evidence can influence which issues capture decision‐makers' attention (agenda setting), which policy and programmatic options are considered and how they are characterised, and how a preferred option can best be implemented. Moreover, in any of these roles research evidence can be used instrumentally, where research guides a specific decision or is used to solve a particular problem; conceptually, to change the way that the issue is seen; or symbolically, to justify an approach to addressing a particular issue (Lavis 2003; Weiss 1979).

While a number of models have been developed to explain the relationships between researchers and decision‐makers, few models describe how KT activities might facilitate uptake of knowledge (Estabrooks 2006). Based on a review of the literature and existing frameworks, Graham and colleagues have proposed a 'knowledge to action' model which incorporates knowledge creation and action (Graham 2006). Knowledge creation involves knowledge inquiry, knowledge synthesis and knowledge tools or products. At an action level, Graham et al suggest that knowledge use is facilitated through eight phases, to:

  • identify a problem that needs addressing;

  • identify, review and select the knowledge or research relevant to the problem (e.g. research evidence);

  • adapt the identified knowledge or research to the local context;

  • assess barriers to using the knowledge;

  • select, tailor and implement interventions to promote the use of knowledge (i.e. implement the change);

  • monitor knowledge use;

  • evaluate the outcomes of using the knowledge; and

  • sustain ongoing knowledge use (Graham 2006).

This review is focused particularly on the selection, tailoring and implementation of interventions that promote the use of knowledge to inform decisions.

Whilst the Graham model is useful in identifying the potential points of intervention in the ‘knowledge to action’ cycle, a conceptual framework developed by Bowen and Zwi considers these issues within the context of the policy process (Bowen 2005). The key points of intervention in this framework are sourcing the evidence, using the evidence and considering the capacity to implement (Bowen 2005). This framework also considers the policy influences and the contextual and decision‐making factors. While the framework does not consider knowledge generation beyond the development of a policy idea, and is limited in the detail accorded to the influences on decision making, it identifies how evidence may be used within the decision‐making process. It therefore needs to be considered in the context of the barriers and facilitators to evidence‐informed decision making.

In understanding these barriers and facilitators, it is useful to consider four commonly cited challenges in linking evidence to policy (Lavis 2009).

1. Research evidence competes with many other factors in the policy‐making process (e.g. institutional constraints, interest group pressure, citizens' values, and other types of information like politicians' and civil servants' past experiences).

2. Research evidence is not valued enough by policy‐makers as an information input.

3. Research evidence is not relevant to the policy issues that policy‐makers face.

4. Research evidence is not easy to use, which may be as a result of one or more of the following factors:

  • research is not communicated effectively (e.g. policy‐makers hear 'noise instead of music' coming from the research community);

  • research evidence is not available within the urgent timelines in which policy‐makers typically work, or in a form that they can readily use;

  • policy‐makers lack mechanisms to prompt them to use research evidence in policy‐making (e.g. policy‐makers can get caught up in the urgency of policy‐making processes without asking whether and how research evidence could support different stages of what is a highly dynamic and iterative process); and

  • policy‐makers and researchers do not create opportunities for issue‐focused discussions that are informed by research evidence and by the tacit knowledge brought to the table by those who will be involved in, seek to influence and be affected by decisions in a given domain.

Many options could be chosen to address these challenges (Lavis 2009); however, a systematic review of the effectiveness of these options for public health decision‐making is needed to support their selection.

Why it is important to do this review

In 2007, Mitton and colleagues attempted to synthesise the research in this area (Mitton 2007). At that time (search completed 2006), no rigorously evaluated intervention studies were located (Mitton 2007). An update of this review process, with more transparent definitions, search methods and quality appraisal, is warranted. Given the emergent interest in this topic amongst researchers and decision‐makers, this review will bring together the emerging body of research evidence and will be useful in informing the selection and implementation of knowledge translation strategies in public health decision‐making contexts.

Objectives

To determine the effectiveness of knowledge translation strategies aimed at facilitating evidence‐informed public health decision making by managers and policy‐makers.

Methods

Criteria for considering studies for this review

Types of studies

Cluster randomised controlled trials (CRCTs) (with at least three control and three intervention clusters), randomised controlled trials (RCTs), quasi‐experimental studies (quasi‐RCTs), controlled before and after studies in which people are allocated to control and intervention groups using non‐randomised methods (Reeves 2008), and interrupted time series (ITS) studies (with at least three measures before and after the intervention) will be considered for inclusion in this review.

In addition, given that this is a new and emerging area of research, a secondary supplementary analysis will be conducted to examine the body of KT research beyond these study designs. This will include studies of KT interventions that have been implemented and evaluated using either quantitative or qualitative methods and where evaluation methods and outcomes are clearly described. More specifically, this would include uncontrolled studies (beyond interrupted time series studies; for example, post‐test only evaluation studies), case reports and qualitative studies. These studies will be included in a separate section of the review results. A similar approach has been employed in a previously conducted review of specialist outreach clinics in primary care and rural hospital settings that is published on The Cochrane Library (Gruen 2004).

Types of participants

Participants include those who make management, executive or policy‐level decisions about public health programs and services, as well as about the governance, financial and delivery arrangements that support these programs and services. These people are commonly referred to in the literature as public health decision‐makers, public health policy‐makers, policy analysts or public health managers. They may be employed within government at the local, regional, state, federal or national level. Public health practitioners who are making decisions about individual clients will be excluded. While non‐government organisations are often involved in policy development, these policies may be operational, or there may be substantial contextual barriers to their implementation. Due to this level of complexity, these organisations will be excluded from this review.

This review will focus on public health decision‐makers who seek to improve health and other outcomes at the population level rather than at the individual level. It is anticipated that these decision‐makers will have a particular focus on the social determinants of health. Studies that include both clinical and population‐level decision‐makers will be included if the results for these two groups can be separated. Studies carried out in high income and in low income settings will be considered for inclusion.

Types of interventions

Interventions to be included in the review are those designed to facilitate the use of research evidence in public health decisions. As discussed above, this review will include three categories of intervention: researcher‐driven interventions (typically push), decision‐maker driven interventions (typically pull) and interventions which represent meaningful partnerships between the researchers and decision‐makers (typically exchange).

This review will exclude interventions that are not specifically designed to support evidence‐informed decision making, for example generic skill development initiatives or project evaluations that may include steering groups with membership drawn from policy and research environments but lack a focus on supporting evidence‐informed decision making specifically. However, information on these intervention types will be extracted from included studies, where relevant, as they may provide important contextual data.

These interventions may be compared with usual practice or with no intervention; for example, knowledge brokering delivered in one region compared with a control or comparison region with no intervention. In some cases, studies may compare two interventions (for example, knowledge brokering versus targeted messages), or the interventions may be multifaceted (for example, knowledge brokering delivered in combination with evidence briefings and compared with usual practice, no intervention or another set of interventions).

Types of outcome measures

Primary outcomes

The primary effectiveness outcome will be broadly defined as the use of research evidence, either at the level of individual behaviour (that is, systematic use of research evidence as measured by behaviours) or at the level of decision making (that is, research informed the conceptual approach to problem definition, or research informed the identification or characterisation of policy options or implementation strategies). Thus any outcome that captures the construct 'use or implementation of research evidence' will be applicable as the primary outcome, and papers will need to include such an outcome to be included in the review. This may include instrumental, conceptual or symbolic use of research evidence (Beyer 1997). These types of outcome could be measured via audit or document review; key informant interviews and surveys may be used to collect self‐report data. Where more than one variable is relevant, the one referred to as primary (in the paper) will be considered. In cases where it is impossible to decide which outcome variable is the primary outcome, the one available from most studies will be the preferred outcome.

Secondary outcomes

Secondary outcomes will include:

  • perceived influence (as opposed to use) of research evidence on public health decision making;

  • intention to use research evidence;

  • increase in awareness and knowledge of research evidence and sources of research evidence to inform public health decision making;

  • increase in confidence or capacity to use research evidence to inform public health decision making; and

  • cost of the intervention.

Adverse effects

If harmful effects are reported in the primary studies, these data will be extracted and reported in the review.

Search methods for identification of studies

We will attempt to include all relevant studies, both published and unpublished, with no restrictions.

Electronic searches

We will search the following electronic databases: CENTRAL, MEDLINE, EMBASE, CINAHL, ProQuest (dissertations and theses), Social Policy and Practice, Sociological Abstracts, Scopus, HMIC, Applied Social Sciences Index and Abstracts, TRoPHI, Public Health Specialist Collection (NHS Evidence), and Web of Science (including both the Science Citation Index and Social Science Citation Index). We will also search the Cochrane Public Health Group (CPHG) Specialised Register and the Effective Practice and Organisation of Care Group (EPOC) Specialised Register. Systematic reviews and reviews of reviews will be identified using the Database of Abstracts of Reviews of Effects (DARE).

The EPOC accepted study design filter will be used to search for studies, in combination with subject headings and free‐text terms more specific to the topic area. It is important that the search remains broad, given that KT is an emerging field of interest and the terminology is reasonably diverse. Search strategies from previously conducted reviews were also used to build the search strategy (Innvaer 2002; Lavis 2005; Mitton 2007).

The following outline of the MEDLINE search will be modified for each database.

MEDLINE knowledge translation terms

1. (knowledge adj2 (application or broke$ or creation or diffus$ or disseminat$ or exchang$ or implement$ or management or mobili$ or translat$ or transfer$ or uptak$ or utili$)).ti,ab.

2. (evidence$ adj2 (exchang$ or translat$ or transfer$ or diffus$ or disseminat$ or implement$ or management or mobil$ or uptak$ or utili$)).ti,ab.

3. (KT adj2 (application or broke$ or diffus$ or disseminat$ or decision$ or exchang$ or implement$ or intervent$ or mobili$ or plan$ or policy or policies or strateg$ or translat$ or transfer$ or uptak$ or utili$)).ti,ab.

4. (research$ adj2 (diffus$ or disseminat$ or exchang$ or transfer$ or translation$ or application or implement$ or mobil$ or uptak$ or utili$)).ti,ab.

5. ("research findings into action" or "research to action" or "research into action" or "evidence to action" or "evidence to practice" or "evidence into practice").ti,ab.

6. ((research utilis$ or research utiliz$) and ("decision mak$" or decisionmak$ or decision‐mak$ or "policy mak$" or "policy‐mak$" or "policy decision$" or "health$ polic$" or practice or action$1)).ti,ab.

7. technology transfer.ti,ab. or technology transfer/ [ML]

8. Diffusion of Innovation/ or (diffusion adj2 innovation).ti,ab.

9. (("systematic review$" or "knowledge synthes$") adj5 ("decision mak$" or "policy mak$" or "policy decision?" or "health polic$")).ti,ab.

10. (("systematic review$" or "knowledge synthes$") adj2 (application or implement$ or utili?ation or utilize? or utilise? or utili?ing)).ti,ab.

11. research utili?ation.ti,ab.

12. ((evidence base$ or evidence inform$) adj5 (decision$ or plan$ or policy or policies or practice or action$)).ti,ab.

13. or/1‐12

 

MEDLINE public health terms

14. Public Health/

15. Public Health Administration/

16. Public Health Practice/

17. Community Health Services/ or Community health planning/

18. community health$.ti,ab.

19. Health Promotion/

20. health promotion?.ti,ab.

21. exp public policy/ [ML includes Health Policy]

22. (public adj2 (health$ or policy$ or policies)).ti,ab.

23. exp health planning/ and (decision? or decision‐mak$ or policy or policies).ti,ab,hw.

24. health management.ti,ab.

25. or/14‐24

 

Randomised controlled trial (RCT) study design (filter)

26. (randomized controlled trial or controlled clinical trial or clinical trial).pt.

27. random$.ti,ab.

28. controlled.ti.

29. (control$ adj (clinical or group$ or trial$ or study or studies or design$ or method$)).ti,ab.

30. control groups/

31. single‐blind method/ or double‐blind method/

32. or/26‐31

 

EPOC Accepted Study Designs (Filter)

32. (intervention? or multiintervention? or multi‐intervention? or postintervention? or post‐intervention? or preintervention? or pre‐intervention?).ti,ab.

33. ("pre test$" or pretest$ or posttest$ or "post test$").ti,ab.

34. (control$ adj2 (before or after)).ti,ab.

35. ("quasi‐experiment$" or quasi experiment$ or "quasi random$" or quasirandom$ or "quasi control$" or quasicontrol$ or ((quasi$ or experimental) adj3 (method$ or study or studies or trial or design$))).ti,ab.

36. ("time series").ti,ab,hw.

37. or/32‐36

 

Additional study designs (filter)

38. "multicenter study".pt.

39. (multicent$ adj2 (design? or study or studies or trial?)).ti,ab.

40. cross‐sectional studies/ or cross‐sectional study/

41. (cross‐sectional adj2 (design or study or studies or trial? or survey or questionnaire)).ti,ab.

42. case‐control studies/ or case control study/

43. (("case control " or multicase or multi‐case) adj2 (design? or study or studies or trial?)).ti,ab.

44. follow‐up studies/ or follow up/

45. (("follow up" or follow‐up) adj2 (design or study or studies)).ti,ab.

46. cross‐over studies/ or crossover procedure/

47. ((crossover or cross‐over) adj2 (design or study or studies or trial)).ti,ab.

48. pilot projects/ or pilot study/

49. (pilot$ adj2 (project? or study or studies)).ti,ab.

50. Comparative study.pt. or comparative study/

51. (comparative adj2 (study or studies)).ti,ab.

52. intervention studies/ or intervention study/

53. program evaluation/

54. evaluation studies.pt.

55. ("evaluation study" or "evaluation studies").ti,ab.

56. ((Process or program$) adj3 (effect$ or evaluat$)).ti,ab.

57. follow‐up assessment.ti,ab.

58. or/38‐57

 

EPOC intervention terms (filter)

59. (audit or self‐audit).ti,ab,hw.

60. "barrier? and facilitator?".ti,ab.

61. (booklet$ or brochure? or pamphlet? or paper‐based or "printed material?").ti,ab.

62. decision making/ or decision mak$.ti,ab,hw.

63. ((change? or changing or improv$ or effect$ or influenc$ or alter$ or adapt$ or amend$ or modify$ or adjust$ or transform$) adj2 (policy or policies or process$ or practic$ or provider? or activit$)).ti,ab.

64. ((knowledge or evidence or quality or research or practice) adj2 gap?).ti,ab.

65. (education$ adj3 (continuing or group? or outreach or plan$ or practitioner? or program? or staff? or team?)).ti,ab,hw.

66. ("evidence based" adj3 (algorithm? or evaluat$ or guideline? or healthcare or implement$ or improv$ or intervention$ or management or pathway? or plan? or practic$ or program? or quality)).ti,ab.

67. (feedback not (feedback adj loop$)).ti,hw.

68. Guideline Adherence/

69. (guideline? adj3 (adher$ or enforc$ or influenc$ or implement$ or impact$ or introduc$ or uptake or follow)).ti,ab.

70. (incentiv$ adj2 (economic or employee? or financ$ or insurer? or insurance or market$ or monetar$ or pay$ or plan? or practitioner? or program$ or provider? or reimburs$ or salary or salarie? or staff or team$ or value‐based)).ti,ab.

71. (collaborat$ or "cross‐profession$" or intraprofession$ or intra‐profession$ or interprofession$ or inter‐profession$ or (skill adj2 mix$) or teambase? or team‐based or inter disciplin$ or multidisciplin$ or multi disciplin$ or multiprofession$).ti,ab,hw.

72. ((knowledge adj2 (transfer$ or translation or shar$ or exchan$)) or KT).ti,ab.

73. ((knowledge or evidence or practice) adj2 (gap? or barrier?)).ti,ab.

74. ((knowledge or evidence) adj2 synthesis).ti,ab.

75. "opinion leader?".ti,ab.

76. (outreach adj2 (communit$ or plan? or program? or visit?)).ti,ab.

77. ((policy or policies) adj2 (chang$ or effect? or impact? or influenc$)).ti,ab.

78. (quality adj2 (assurance or improvement? or initiativ$ or plan$ or program$ or review or audit)).ti,ab.

79. (QI adj (initiative? or intervention? or program$ or plan$ or audit)).ti,ab.

80. ("user computer" or "computer user").ti,ab.

81. computers, handheld/ or handheld?.ti,ab. or (PDA or "personal data assistant?" or blackberr$).ti,ab.

82. telephon$.ti,ab,hw. or (tele‐health or tele‐medicine or e‐health).ti,ab.

83. internet.ti,ab,hw. or (intranet or LAN or WAN or blog$ or (computer$ adj2 network$) or online$ or web$ or wiki).ti,ab.

84. Social marketing/ or "social marketing".ti,ab.

85. "virtual communit$".ti,ab.

86. ((change? or changing or improv$ or effect$ or influenc$) adj2 (policy or policies or practic$ or provider?)).ti,ab.

87. ("performance based" or value‐based).ti,ab.

88. (journal club or clinical librarian or library or libraries or "answer service$" or "information science").ti,ab.

89. or/59‐88

90. 32 or 37 or 58

91. 13 and 25 and 90

92. 13 and 25 and 89

Searching other resources

Grey literature

We will undertake the following searches to identify unpublished studies.

  • Search websites of key organisations (e.g. Canadian Health Services Research Foundation, UK Department of Health, Health Technology Assessment, National Institute of Health Research, Medical Research Council, Evidence for Policy and Practice Information and Coordinating Centre (EPPI‐Centre), Centre for Evidence Based Medicine, National Institute for Health and Clinical Excellence, Centre for Reviews and Dissemination).

  • Run keyword searches in Google and Google Scholar.

  • Search relevant conference proceedings (e.g. Cochrane Colloquia, Campbell Colloquia, Canadian Cochrane Contributors Meetings).

  • Search web‐based clinical trial registries (e.g. National Library of Medicine, Controlled Clinical Trials).

Handsearching

We will handsearch Evidence and Policy and Implementation Science (from 2000), as citations from these journals are not currently indexed in MEDLINE.

Reference lists

Reference lists of included studies will be examined for additional papers to be considered for inclusion. Web of Science will also be used to identify papers which have cited included studies.

Correspondence

Contact with experts in the field of KT (as identified by author team) and authors of identified studies will be made to supplement our documented search strategy.

Data collection and analysis

Selection of studies

Eligibility will be determined by the inclusion and exclusion criteria listed above. The abstracts and titles of articles retrieved from each search will be scanned independently by two review authors to assess eligibility. If there is disagreement between these two authors, a third author will be called on to help resolve the difference of opinion. Full copies of (a) eligible papers and (b) papers where further information is required to assess eligibility will be retrieved for closer examination. Studies which appear relevant but do not meet the inclusion criteria will be listed in the table of excluded studies, with reasons for their exclusion.

Data extraction and management

Data will be independently extracted by two review authors. Data extraction forms will be modelled on the CPHG and EPOC data extraction forms. Outcome measures will be extracted either as dichotomous outcomes, recorded as the number of events (for example, the number of users of research evidence) out of the total observed (N), or as continuous measures, recorded as the observed mean (or median) change from baseline with its corresponding standard deviation (or a standard deviation estimated from any reported dispersion measure). These data will initially be typed into a custom‐made Excel spreadsheet and subsequently entered into Review Manager (RevMan) version 5.0, and will be checked by a second review author.
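
Where a study reports a dispersion measure other than the standard deviation, the standard deviation can be recovered using standard conversions before entry into RevMan. The sketch below illustrates two such conversions; the function names and figures are illustrative, not drawn from the protocol, and assume a standard error or a 95% confidence interval is reported for the mean change:

```python
import math

def sd_from_se(se: float, n: int) -> float:
    """Recover a standard deviation from a reported standard error."""
    return se * math.sqrt(n)

def sd_from_ci(lower: float, upper: float, n: int, z: float = 1.96) -> float:
    """Recover a standard deviation from a reported 95% confidence interval
    for the mean, assuming a normal approximation."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

# Hypothetical extracted values: mean change reported with SE 0.8, n = 25
print(sd_from_se(0.8, 25))       # 4.0
print(sd_from_ci(2.5, 5.9, 25))  # ≈ 4.34
```

These conversions follow the standard relationships SD = SE × √n and SD = √n × (CI width) / (2 × 1.96).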

Additional items to be extracted (when available) will include:

  • type of public health decision‐maker (level of jurisdiction or catchment area, type of organisation);

  • area of public health specialty;

  • theoretical underpinning of the intervention;

  • evidence‐base provided to justify intervention approach;

  • description of intervention components (number of components, mode of delivery, format, duration etc);

  • the cost of the intervention.

Assessment of risk of bias in included studies

Risk of bias will be assessed using the Cochrane 'Risk of bias' tool (Higgins 2008a). This includes the assessment of sequence generation; allocation concealment; blinding of participants, personnel and outcome assessors; incomplete outcome data; selective outcome reporting; and other sources of bias (Higgins 2008a), and will be used to assess the risk of bias of RCTs and cluster RCTs. This will be supplemented with the EPOC 'Risk of bias' guidance to assess the risk of bias of non‐randomised studies (Effective Practice and Organisation of Care 2009). In addition, for all included studies we will use a series of questions drawn from the Effective Public Health Practice Project quality assessment tool to assess internal and external validity (Effective Public Health Practice Project n.d.). This approach is important because the potential complexity of these studies warrants investigation into factors beyond those explored in the Cochrane 'Risk of bias' tool: such studies are more likely to be prone to bias, and the external validity, fidelity and sustainability of interventions need to be assessed and reported. Risk of bias will be assessed independently by two review authors. Any discrepancies in the quality ratings will first be resolved through discussion between the two review authors; if necessary, an additional review team member will act as a third party and assist in resolution.

The risk of bias of studies included in the supplementary analysis will not be assessed using the methods described above, as these studies will not be considered included studies. Instead, simpler methods will be used to assess their quality. Assessing the quality of uncontrolled studies (excluding ITS studies) is complex, given that such designs are inherently prone to bias, and there are no agreed criteria for doing so (Reeves 2008). This review will use criteria developed for a similar review (Mitton 2007). While that quality assessment tool used a scoring scale, scoring has been identified as problematic and is not supported by empirical evidence (Higgins 2008a). Therefore, papers will not be scored; rather, comments on their adherence to each of the following domains will be made.

1. Literature review: directly related, recent literature is reviewed and research gap(s) identified.

2. Research questions and design: a priori research questions are stated and are appropriate; a hypothesis, a research purpose statement, or a general line of inquiry is outlined. A study design or research approach is articulated and is appropriate to the study question.

3. Population and sample: the setting, target population, participants, and approach to sampling are outlined in detail and are appropriate to the study question.

4. Data collection and capture: key concepts, measures or variables are defined and no gaps are apparent. A systematic approach to data collection is undertaken and reported. The response or participation rate and completeness of information capture are adequate and appropriately reported.

5. Analysis and results: an approach to analysis and a plan to carry out that analysis are specified and appear appropriate to the study design. Results are clear and comprehensive. Conclusions follow logically from the findings.

Measures of treatment effect

Dichotomous outcomes will be reported as proportions (that is, within-group response rates) and, in any two-group comparison, will be compared as risk ratios (RR). Continuous outcome measures will be reported as changes from baseline and, in any two-group comparison, will be compared as the difference in means; however, because we anticipate that many different scales will be used to measure the same constructs, we will use the standardised mean difference (SMD) as the relative measure for continuous outcomes. Ordinal outcome measures may be analysed as either dichotomous or continuous data and reported in either of the above formats. We therefore anticipate that all quantitative outcomes can be expressed in a form suitable for interpretation.
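As an illustrative sketch only (not part of the protocol), the two effect measures described above can be computed as follows. The numbers are hypothetical, and the SMD shown uses the pooled-standard-deviation form; the review's actual calculations would be performed in RevMan.

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio (RR): ratio of two within-group response rates."""
    return (events_a / n_a) / (events_b / n_b)

def standardised_mean_difference(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """SMD: difference in means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical data: 30/50 "users of research evidence" in the intervention
# group versus 20/50 in the comparison group.
rr = risk_ratio(30, 50, 20, 50)           # 1.5
smd = standardised_mean_difference(10, 4, 50, 8, 4, 50)  # 0.5
```

Because the SMD is unitless, it allows outcomes measured on different scales (but tapping the same construct) to be pooled on a common metric.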

Unit of analysis issues

In some cases, where studies randomise or allocate clusters but do not account for clustering in their analysis, unit of analysis errors may occur. To deal with these potential errors, we will (where possible) contact the original authors to request missing information and then attempt to re-analyse the studies as outlined in the Cochrane Handbook for Systematic Reviews of Interventions (Higgins 2008b). Where re-analysis occurs, we will clearly mark the new results as 're-analysed'. We will also note where re-analysis was not possible. These considerations will all be scrutinised and evaluated by the biostatistician.
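One standard re-analysis approach described in the Cochrane Handbook is to inflate variances (or deflate sample sizes) by the design effect, which depends on the average cluster size and an intracluster correlation coefficient (ICC). A minimal sketch with hypothetical numbers (the ICC of 0.05 is assumed for illustration):

```python
def design_effect(avg_cluster_size, icc):
    """Design effect: 1 + (m - 1) * ICC, where m is the average cluster size."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n, avg_cluster_size, icc):
    """Sample size deflated by the design effect, for use in re-analysis."""
    return n / design_effect(avg_cluster_size, icc)

# Hypothetical: 200 participants in clusters averaging 20, assumed ICC = 0.05
de = design_effect(20, 0.05)                 # 1.95
eff_n = effective_sample_size(200, 20, 0.05) # ~102.6
```

Analysing the 200 participants as if individually randomised would overstate precision; the effective sample size of roughly 103 restores appropriately wide confidence intervals.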

Dealing with missing data

We will attempt to contact the lead authors of primary studies (by email) to locate missing data. All missing outcome data for included studies will be captured on the data extraction form and reported in the risk of bias table. The study will be excluded from the review if insufficient information on the primary outcome of concern is obtained (due to inability to contact authors, lost or unavailable data). Reasons for exclusion will be given in the table 'Characteristics of excluded studies'.

Assessment of heterogeneity

We will not attempt to combine results from different study designs in an overall meta-analysis; instead, designs will be presented as separate subgroups in the same forest plot (that is, with no overall diamond). Heterogeneity will be assessed through visual inspection and a logic-based assessment of differences between studies. We will conduct a standard Q test for heterogeneity and evaluate heterogeneity via the I2 statistic, which can be interpreted as the amount of inconsistency in the reported results between the individual studies, as well as between individual strata (Higgins 2002; Higgins 2003).

Assessment of reporting biases

Formal statistical methods for assessing publication bias may not be appropriate given the heterogeneity of the included study designs. If more than 10 studies are identified, publication bias will be explored using funnel plots, which help to assess the relationship between effect size and study precision.
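A funnel plot simply places each study's effect estimate on one axis against its precision (the reciprocal of its standard error) on the other; marked asymmetry around the pooled estimate suggests possible publication bias. A minimal sketch of the plot coordinates, using hypothetical log risk ratios and standard errors:

```python
# Hypothetical per-study log risk ratios and their standard errors
log_rr = [0.10, 0.25, 0.40, 0.05, 0.55]
se = [0.10, 0.20, 0.30, 0.15, 0.35]

# Funnel-plot coordinates: effect estimate (x-axis) vs precision (y-axis).
# Small, imprecise studies scatter widely at the bottom of the funnel;
# a one-sided gap there would hint at unpublished small negative studies.
precision = [1 / s for s in se]
points = list(zip(log_rr, precision))
```

These coordinates would then be passed to any plotting library; the assessment itself is visual rather than statistical.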

Data synthesis

The relative measures described above will be used for the evidence synthesis. Estimates will be combined according to study design; that is, we will pool estimates within each stratum (defined by study design) but will not combine data across different designs. Strata will be meta-analysed using a random-effects model as the default method. Inconsistency across strata will be assessed using the I2 statistic. Where meta-analysis is not possible, median effect sizes will be presented (with a range).
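As an illustrative sketch of random-effects pooling (the protocol does not name a specific estimator; the widely used DerSimonian-Laird method is assumed here, and the data are hypothetical):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate using the DerSimonian-Laird
    moment estimator of the between-study variance tau^2."""
    k = len(effects)
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_star = [1 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2

# Hypothetical stratum of three studies (e.g. three cluster RCTs)
pooled, se, tau2 = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
```

Adding tau² to each study's variance widens the pooled confidence interval, reflecting genuine between-study variation rather than treating all studies as estimating one identical effect.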

Narrative synthesis will supplement this analysis and will help to explore intervention processes. It will also be important for grading the evidence, in particular for considering the inconsistency and imprecision of the body of evidence. Narrative synthesis will be guided by the items noted in 'Data extraction and management'.

We will explore the ability to examine issues of equity using the Cochrane and Campbell Equity Checklist for Review Authors (Ueffing 2009), working with the Campbell and Cochrane Equity Methods Group. We will also describe all the findings from analysis of cost related data, with the hope of providing recommendations useful for policy and practice decision making.

A summary of the results of the data synthesis and the assessment of the quality of the evidence will be included in a 'Summary of findings' table for the main comparison. The quality of the body of evidence for each individual outcome will be assessed using the GRADE approach. We will consider the risk of bias of the included studies, the directness of the evidence, heterogeneity, the precision of the effect estimates and the risk of publication bias, as well as dose-response relationships, the absence of plausible confounders, and the magnitude of the effect. The quality of the body of evidence for each individual outcome will be graded from 'High' to 'Low' quality (see Chapter 12, Cochrane Handbook).

Subgroup analysis and investigation of heterogeneity

Subgroup analyses will be conducted, if possible, to explore heterogeneity according to the following a priori subgroups.

1. Focus of intervention(s) (e.g. researcher-driven interventions, decision-maker driven interventions, and interventions that represent meaningful partnerships between researchers and decision-makers).

2. Type of intervention(s): specific KT activities (e.g. knowledge brokering, training, evidence summaries).

3. Type of decision-making setting (e.g. level of government).

4. Equitable distribution of effect (e.g. effect in lower middle-income countries (LMIC) or resource-poor settings as opposed to high-income settings).

We will also perform a number of analyses stratifying the available studies according to their design characteristics (see 'Types of studies'). We will manually examine graphs for outliers and differences between studies.

Sensitivity analysis

If relevant, all the random‐effects analyses will also be conducted using a fixed‐effect model to assess whether the meta‐analyses are subject to small‐study bias.
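The fixed-effect counterpart used in this sensitivity analysis is plain inverse-variance pooling; comparing its estimate with the random-effects estimate shows how much weight small, imprecise studies are carrying. A minimal sketch with hypothetical data:

```python
import math

def fixed_effect(effects, variances):
    """Fixed-effect (inverse-variance) pooled estimate and standard error."""
    w = [1 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return pooled, se

# Hypothetical: one large precise study (0.2) and one small imprecise study (0.8)
pooled, se = fixed_effect([0.2, 0.8], [0.01, 0.09])  # pooled = 0.26
```

Here the fixed-effect estimate sits close to the large study; a random-effects model would pull the pooled value towards the small study, so a large discrepancy between the two flags sensitivity to small-study bias.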