
Abstract

Objective

Policy makers have increasingly turned to learning collaboratives (LCs) as a strategy for improving usual care through the dissemination of evidence-based practices. The purpose of this review was to characterize the state of the evidence for use of LCs in mental health care.

Methods

A systematic search of major academic databases for peer-reviewed articles on LCs in mental health care generated 421 unique articles across a range of disciplines; 28 mental health articles were selected for full-text review, and 20 articles representing 16 distinct studies met criteria for final inclusion. Articles were coded to identify the LC components reported, the focus of the research, and key findings.

Results

Most of the articles included assessments of provider- or patient-level variables at baseline and post-LC. Only one study included a comparison condition. LC targets ranged widely, from use of a depression screening tool to implementation of evidence-based treatments. Fourteen crosscutting LC components (for example, in-person learning sessions, phone meetings, data reporting, leadership involvement, and training in quality improvement methods) were identified. The LCs reviewed reported including, on average, seven components, most commonly in-person learning sessions, plan-do-study-act cycles, multidisciplinary quality improvement teams, and data collection for quality improvement.

Conclusions

LCs are being used widely in mental health care, although there is minimal evidence of their effectiveness and unclear reporting in regard to specific components. Rigorous observational and controlled research studies on the impact of LCs on targeted provider- and patient-level outcomes are greatly needed.

Recently, a tremendous emphasis has been placed on the integration of evidence-based practices into routine mental health care. Substantial budget cuts to mental health funding at the state and national levels have forced policy makers to seek out efficient and effective ways to scale up training in evidence-based practices (1). States, counties, and national organizations have turned to learning collaboratives (LCs) as a method for large-scale training with ongoing support.

This collaborative approach has clearly become a priority in the field. The Substance Abuse and Mental Health Services Administration recently issued a call for applications for State Adolescent Treatment Enhancement and Dissemination grants totaling $30 million over three years to help states develop “learning laboratories” focused on shared provider experiences during the implementation of new evidence-based practices (2). Similarly, through the National Council for Community Behavioral Healthcare, 35 states are now using LCs to change health care provider practices (personal communication, Salerno A, July 2012). LCs represent a significant investment in the field as a potentially viable approach to large-scale implementation and dissemination of new treatment practices. However, there has been little research on the effectiveness of LCs for evidence-based practices in mental health care.

LCs as they are implemented in mental health care are adapted from quality improvement collaborative (QIC) models used in health care. One of the most widely cited and adapted QIC models is the Breakthrough Series (BTS) collaborative of the Institute for Healthcare Improvement (IHI) (3–9). The quality improvement processes at the core of the IHI and other approaches are rooted in industrial improvement practices and the work of W. Edwards Deming and Joseph Juran, pioneers of quality management who advocated for process improvement driven by ongoing data collection and analysis and an assumption of workers’ interest in learning and improvement (10–12).

Although some evidence exists for the effectiveness of QICs in health care, there is a need for rigorous research in this area. A systematic literature review by Schouten and colleagues (13) identified nine controlled studies of health care QICs and concluded that the QICs showed promise in changing provider practices. However, the authors found less evidence in support of an impact on patient-level outcomes. Although the review included two randomized controlled trials (RCTs), a majority of the studies used matched control sites or compared administrative data from similar sites in a larger provider network.

Building on these findings, a more recent review included 24 articles, with the goal of updating the original literature review and developing a deeper understanding of the core components of QICs as they are reported in the literature (14). This review included additional RCTs (five distinct studies); however, as with the earlier review (13), a vast majority of studies used matched controls. Of the 14 crosscutting components identified as common ingredients, the QICs reported including, on average, six or seven components—most commonly, in-person learning sessions, plan-do-study-act (PDSA) cycles, multidisciplinary quality improvement teams, and data collection for quality improvement. As in the earlier review (13), outcome data suggested that the greatest impact of the QICs was on provider-level process-of-care variables; patient-level findings were less robust. Because of the imprecise reporting on specific components, it was not possible to link any specific components with improved care.

Of note, neither of these systematic reviews included collaboratives focused on mental health issues because when they were undertaken there had been no controlled studies targeting mental health care. LCs have been applied in mental health care to a wide range of practices, including the process of care (for example, engagement in services, care integration, and use of a screening tool) (15–18) and implementation of complex evidence-based practices (7,19). The focus on evidence-based practices is notable given the complexity of the patient outcomes and the substantial skill development required of providers.

This systematic literature review focused on peer-reviewed studies of mental health LCs that included any patient or provider pre-to-post outcome data. Given the differences between mental health and general health care settings in terms of their structure, types of interventions and patient issues addressed, and data systems available, there is a critical need for a better understanding of how LCs are implemented in mental health care. The primary goal of this review was to identify the components of LCs as reported in mental health studies and to characterize the existing data on LCs (for example, patient-level data, reports of changed provider practices, and analyses of feasibility or acceptability in real-world care settings).

Methods

This literature search on LCs focused on individual empirical articles published from January 1995 to October 2013. The database search included Ovid MEDLINE, ProQuest, PsycINFO, and PubMed. Search terms included “learning collaborative,” “quality improvement collaborative,” “Breakthrough Series,” and “NIATx.” These terms were refined after several preliminary searches and are similar to those used in earlier reviews (13,14). NIATx (Network for the Improvement of Addiction Treatment) was included in order to capture the NIATx process improvement approach used in the substance abuse literature, which draws on conceptual models similar to the predominant approach to collaboratives, specifically the IHI’s BTS (20).

Articles that met inclusion criteria were peer reviewed, written in English, and included a pre- and postintervention comparison of the impact of an LC. To define LCs in mental health care, we searched the theoretical literature on QICs (3–5,9,21–24) and reviewed the definition used by Schouten and colleagues (13). We then conducted informational interviews with a subset of LC purveyors to elicit more detail. This review defined LCs as organized, structured group learning initiatives in which organizers took the following steps: convened multidisciplinary teams representative of different levels of the organization; focused on improving specific provider practices or patient outcomes; included training from experts in a particular practice or in quality improvement methods; included a model for improvement with measurable targets, data collection, and feedback; engaged multidisciplinary teams in active improvement processes wherein they implemented “small tests of change” or engaged in PDSA activities; and employed structured activities and opportunities for learning and cross-site communication (for example, in-person learning sessions, phone calls, and e-mail Listservs) (3,5–7,9,25,26). We assessed the ways in which the 14 components identified by Nadeem and colleagues (14), including in-person learning sessions, phone meetings, data reporting, feedback, training in quality improvement methods, and use of process improvement methods, were reported in these studies.

Two authors (EN and LCH) reviewed all abstracts generated by the initial search to select articles that merited a full-text review. These two authors also reviewed each article retrieved to determine whether it met final inclusion criteria. In the event of a discrepancy or if inclusion was unclear, the two authors conferred with the other authors to make a final determination. Once article selection was finalized, each article was coded by using a standardized table to summarize study details (for example, targets for improvement, study design, setting, study sample, and LC components). A primary coder was assigned to each article, and a secondary coder reviewed the primary coder’s work. Disagreements were resolved by consensus.

The initial search generated 421 unique articles across several disciplines (primarily mental health, education, and health care). From a review of the 421 abstracts, 52 were determined to be related to mental health or substance abuse, 28 of which met criteria for full-text article review (that is, they appeared to be focused on learning collaboratives). Articles were excluded after the full-text review if they did not report any pre-post LC quantitative data. After a review of those articles and their references, 20 articles were selected for final inclusion (7,15–18,27–41). [A figure illustrating article selection is included in an online data supplement to this article.]
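The selection arithmetic above can be sanity-checked with a short sketch (counts transcribed from this section; the article-to-study groupings follow the shared footnotes in Table 1):

```python
# Article-selection funnel reported in Methods.
funnel = [
    ("unique articles from the database search", 421),
    ("related to mental health or substance abuse", 52),
    ("met criteria for full-text review", 28),
    ("selected for final inclusion", 20),
]
for label, n in funnel:
    print(f"{n:>4}  {label}")

# Articles sharing a Table 1 footnote report on a single study:
# 2 Cavaleri articles, 3 Epstein articles, and 2 Hoffman/McCarty
# articles, so 20 included articles collapse into 16 distinct studies.
multi_article_studies = [2, 3, 2]
extra_articles = sum(n - 1 for n in multi_article_studies)
distinct_studies = 20 - extra_articles
print(distinct_studies)  # 16
```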

Results

The 20 articles selected for inclusion encompass 16 distinct studies. Table 1 provides a summary of the study type, LC model, and LC components reported in each study. Table 2 provides definitions of the study characteristics and LC components tracked in this review. The LC features were categorized into components, quality improvement processes, and organizational involvement. LC components refer to LC features that constituted the structure of the model. Quality improvement processes include available details about PDSAs and other quality improvement activities. The organizational involvement section includes indicators of the ways in which the LC penetrated various levels of the organization.

Table 1 Twenty articles representing 16 studies of learning collaboratives (LCs) included in the reviewa
Each row lists article, target for improvement, model,b sample, and LC length, followed by 13 Yes/No/Unclear indicators in this order. LC components:c prework expert panel; prework ODC; in-person learning sessions; PDSAs; multidisciplinary QITs; QIT phone calls; e-mail or Web support. Quality improvement:d new QI data; QI data review; external support with data review. Organizational penetration:e involved leaders; external training for non-QIT staff; QIT training for non-QIT staff.
Cavaleri et al., 2006 (15)f | Mental health service use and evidence-based engagement strategies | BTS | 12 mental health agencies | 9 months | Unclear, Yes, Yes, Unclear, Yes, Yes, No, Unclear, Unclear, No, Yes, Unclear, No
Cavaleri et al., 2007 (27)f | Mental health service use and evidence-based engagement strategies | BTS | 9 mental health agencies (9 of the 12 mental health agencies from Cavaleri et al. [15]) | 9 months | Unclear, Yes, Yes, Unclear, Yes, Yes, No, Unclear, Unclear, No, Yes, Unclear, No
Cavaleri et al., 2010 (16) | Mental health service use and evidence-based engagement strategies | BTS | 5 mental health agencies (4 experimental; 1 did not implement any engagement strategies) | 9 months | Unclear, Yes, Yes, Unclear, Yes, Yes, No, Unclear, Unclear, No, Yes, No, Yes
Duffy et al., 2008 (32) | Use of a depression assessment in psychiatric practices | BTS, CCM | 19 psychiatric practices (2 practices dropped out without completing data collection) | 12 months | No, Yes, Yes, Yes, Yes, Yes, No, Yes, No, No, Unclear, No, Yes
Ebert et al., 2012 (7) | Use of Trauma-Focused Cognitive Behavioral Therapy in community practice settings | BTS | 11 mental health agencies (2 agencies did not complete) | 18 months | Yes, Yes, Yes, Yes, Yes, Yes, Yes, Yes, No, No, Yes, No, No
Epstein et al., 2008 (28)g | Adherence to guidelines for evidence-based assessment and treatment of attention-deficit hyperactivity disorder (ADHD) | CCM | 19 practices: 65 pediatricians and 19 family physicians | Unclear | Unclear, No, Yes, Yes, Yes, No, No, Yes, Yes, Yes, Yes, No, No
Epstein et al., 2010 (29)g | Adherence to guidelines for evidence-based assessment and treatment of ADHD | CCM | 47 practices: 142 pediatricians and 11 family physicians | Unclear | Unclear, No, Yes, Yes, Yes, No, No, Yes, Yes, Yes, Yes, No, No
Epstein et al., 2010 (30)g | Adherence to guidelines for evidence-based assessment and treatment of ADHD | CCM | 31 pediatric practices: 123 pediatricians (data from family physicians were excluded) | Unclear | Unclear, No, Yes, Yes, Yes, No, No, Yes, Yes, Yes, Yes, No, No
Gustafson et al., 2013 (34) | Time to treatment, client retention, and new patient recruitment in addiction centers | NIATx | 201 addiction treatment centers | 18 months | Unclear, No, Yes, Yes, Unclear, Yes, Yes, Yes, Yes, Yes, Yes, No, No
Haine-Schlagel et al., 2013 (37) | Attendance engagement in community-based early childhood intervention programs | BTS | 4 developmental services programs within a children’s hospital: 29 providers (2 providers did not complete; 1 was added after initiation) | 9 months | Yes, No, Yes, Yes, Yes, Yes, Yes, No, No, No, Yes, No, No
Katzelnick et al., 2005 (17) | Implementation of the chronic care model for depression treatment in primary health care | BTS, CCM | 20 health care organizations (3 teams did not complete) | 13 months | No, No, Yes, Yes, Unclear, Yes, Yes, Yes, Unclear, Yes, Yes, No, Yes
Hoffman et al., 2008 (36)h | Time to treatment and client retention in outpatient, intensive outpatient, or residential addiction treatment units | NIATx | Second cohort: 11 addiction treatment agencies (10 outpatient, 4 intensive outpatient units) | 18 months | Yes, Yes, Yes, Yes, Unclear, Yes, Yes, Yes, Yes, Yes, Yes, No, No
McCarty et al., 2007 (35)h | Time to treatment and client retention in outpatient, intensive outpatient, or residential addiction treatment units | NIATx | First cohort: 13 addiction treatment agencies (7 outpatient, 4 intensive outpatient, 4 residential units) | 18 months | Yes, Yes, Yes, Yes, Unclear, Yes, Yes, Yes, Yes, Yes, Yes, No, No
Meredith et al., 2006 (31) | Depression treatment in primary care | BTS, CCM | 17 mental health agencies | 13 months | No, No, Yes, Yes, Yes, Yes, No, Unclear, No, No, No, No, Yes
Roosa et al., 2011 (19) | Client retention in chemical dependency treatment and client access to mental health services | NIATx | Chemical dependency collaborative: 4 treatment agencies (1 did not complete); mental health collaborative: 6 treatment agencies | 27 months | No, No, Yes, Yes, Yes, No, No, No, No, No, Yes, Unclear, No
Rutkowski et al., 2010 (38) | Time to treatment, no-show rates, admissions, or continuation in treatment for addiction treatment services | NIATx | Phase I: 6 treatment agencies, 7 change teams; phase II: 8 treatment agencies, 13 change teams | Phase I, 11 months; phase II, 12 months | Yes, Yes, Yes, Yes, Yes, Yes, No, Yes, Yes, Yes, Yes, No, No
Stephan et al., 2011 (39) | Mental health service quality and collaborative care in school-based mental health centers | NASBHC | 19 school-based health centers | 15 months | Yes, Yes, Yes, No, Yes, Yes, No, Yes, Yes, Yes, Unclear, No, Yes
Strating et al., 2012 (41) | Four distinct collaboratives focused on social psychiatric care, recovery-oriented care, social participation, and somatic comorbidity of psychiatric clients | BTS | 94 distinct teams of mental health care providers (collaborative 1: 25 teams; collaborative 2: 25 teams; collaborative 3: 26 teams; collaborative 4: 18 teams) | 12 months | No, No, Yes, Yes, Yes, No, No, Yes, Yes, Yes, No, No, No
Vannoy et al., 2011 (18) | Integration of services between community health centers (CHCs) and community mental health centers (CMHCs) to improve treatment of depression and bipolar disorder in CHCs and improve care of patients at risk of metabolic syndrome in CMHCs | BTS | 15 CHC and CMHC pairs (1 pair dropped out because of staff turnover) | 12 months; 3 cohorts | No, Yes, Yes, Yes, Unclear, Yes, No, Yes, Yes, Yes, No, No, No
Versteeg et al., 2012 (33) | Implementation of multidisciplinary practice guidelines in mental health care organizations (specific domains: anxiety disorders, dual diagnosis, and schizophrenia) | BTS | 19 mental health care organizations; 26 distinct LC teams | 12 months | Unclear, No, Yes, No, Yes, No, Yes, Yes, Yes, Yes, Unclear, No, No

a Articles are in alphabetical order by author name and grouped by study.

b BTS, Breakthrough Series; CCM, chronic care model; NIATx, Network for the Improvement of Addiction Treatment; NASBHC, National Assembly on School-Based Health Care

c ODC, organizations required to demonstrate commitment; PDSAs, plan-do-study-act cycles; QITs, quality improvement teams

d New QI data, sites collected new data for quality improvement purposes; QI data review, sites reviewed quality improvement data and used feedback

e Involved leaders, involvement or outreach (or both) to organizational leadership

f,g,h Articles that share a footnote are based on data from a single study.

Table 2 Definitions of domains and components of learning collaboratives (LCs) in the studies revieweda
Domain | Definition
Article informationb
 Target for improvement | Focus area for the LC
 Model | LC alignment with existing collaborative models
 Study sample | Focus population
LC componentc
 Length of collaborative | Standard LC length
 Prework: convened expert panel | The Institute for Healthcare Improvement’s Breakthrough Series (BTS) model calls for a planning group that identifies targets for improvement and plans the collaborative.
 Prework: organizations required to demonstrate commitment | The BTS model recommends requiring formal commitments, application criteria, or “readiness” activities for LC sites.
 In-person learning sessions | Teams are traditionally trained in clinical and quality improvement (QI) approaches during in-person sessions.
 Plan-do-study-act (PDSA) cycles | PDSA cycles are a key component of the rapid-cycle approach to change recommended in QI collaborative models.
 Multidisciplinary QI team | LCs typically involve staff members at various levels of the organization.
 QI team calls | Group phone calls among QI team members or with members of other participating organizations are used as an approach for providing ongoing support.
 E-mail or Web support | E-mail, Listservs, or other forms of Web support are used as an approach for providing ongoing support.
QI processesd
 Sites collected new data for QI | During the LC, did sites collect new data for QI purposes?
 Sites reviewed data and used feedback | Did the LC sites review new data, receive feedback, and adjust their practices according to findings?
 External support for data synthesis and feedback | Did LC faculty or other experts provide support with data synthesis and feedback?
Organizational involvemente
 Leadership involvement and outreach | Did members of the LC involve or otherwise reach out to organization leadership?
 Training for non–QI team staff members by experts | Did LC faculty or other experts provide training for staff members who were not part of the QI team?
 Training for non–QI team staff members by the QI team | After the LC, did newly trained QI team members provide training for staff members who were not part of the QI team?

a Table adapted from Nadeem et al. (14) with permission

b Basic study details highlighted by the published article

c LC components were compiled through the literature review and explicitly referenced by study authors.

d Beyond the basic components of the LC, which QI techniques were included in the LC?

e In theory, LCs enable an organization to enact change at multiple levels within their organizational structure. Did the LC take steps to train or otherwise involve members of the organization who were not directly included in the collaborative?


Description of LC components

Ten of the 16 distinct studies were explicitly based on the IHI BTS model, three of which also noted using the chronic care model, a model originally used as part of a joint effort by the IHI and the Robert Wood Johnson Foundation (42). One additional study cited the chronic care model without the BTS model, four studies reported using the NIATx model for process improvement (43), and one study reported using the National Assembly on School-Based Health Care’s (NASBHC) QIC model, which is based on nationally recognized models for quality improvement (39). On average, each of the 16 studies reported implementing seven LC components. The most commonly reported components included in-person learning sessions (16 of 16), multidisciplinary quality improvement teams (12 of 16), PDSAs (12 of 16), and quality improvement team calls (12 of 16). In addition, 11 of the 16 studies reported doing some leadership outreach or engagement. Across articles, there was great variability in the level of detail provided in descriptions of the components of each LC.
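The averages reported above can be reproduced from Table 1. The sketch below transcribes the 13 yes/no/unclear indicator columns for each of the 16 studies (Y = yes, N = no, U = unclear), in the column order of the table; study labels are shorthand for the grouped articles:

```python
# 13 indicators per study, transcribed from Table 1, in column order:
# expert panel, ODC, in-person sessions, PDSAs, multidisciplinary QITs,
# QIT calls, e-mail/Web support, new QI data, QI data review, external
# support with data review, involved leaders, external training for
# non-QIT staff, QIT training for non-QIT staff.
indicators = {
    "Cavaleri (15,27)":        "UYYUYYNUUNYUN",
    "Cavaleri (16)":           "UYYUYYNUUNYNY",
    "Duffy (32)":              "NYYYYYNYNNUNY",
    "Ebert (7)":               "YYYYYYYYNNYNN",
    "Epstein (28-30)":         "UNYYYNNYYYYNN",
    "Gustafson (34)":          "UNYYUYYYYYYNN",
    "Haine-Schlagel (37)":     "YNYYYYYNNNYNN",
    "Katzelnick (17)":         "NNYYUYYYUYYNY",
    "Hoffman/McCarty (35,36)": "YYYYUYYYYYYNN",
    "Meredith (31)":           "NNYYYYNUNNNNY",
    "Roosa (19)":              "NNYYYNNNNNYUN",
    "Rutkowski (38)":          "YYYYYYNYYYYNN",
    "Stephan (39)":            "YYYNYYNYYYUNY",
    "Strating (41)":           "NNYYYNNYYYNNN",
    "Vannoy (18)":             "NYYYUYNYYYNNN",
    "Versteeg (33)":           "UNYNYNYYYYUNN",
}

mean_yes = sum(s.count("Y") for s in indicators.values()) / len(indicators)
in_person = sum(s[2] == "Y" for s in indicators.values())
pdsas = sum(s[3] == "Y" for s in indicators.values())
qits = sum(s[4] == "Y" for s in indicators.values())
print(round(mean_yes), in_person, pdsas, qits)  # 7 16 12 12
```

The mean of 7.1 "yes" entries per study matches the average of seven components reported in the text, and the column tallies reproduce the 16-of-16 and 12-of-16 counts.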

Overall LC structure.

The LCs lasted an average of 14 months (range nine to 27 months), with a modal length of 12 months. LCs typically began with an in-person learning session; LC faculty hosted the sessions, and multidisciplinary quality improvement teams attended. Follow-up occurred via additional in-person learning sessions, regular phone meetings for the quality improvement teams, and e-mail or Web-based support. Sites conducted quality improvement projects between quality improvement team calls and in-person learning sessions. All in-person learning sessions and most phone meetings involved multiple sites.

Content of in-person learning sessions.

All studies reported including in-person learning sessions throughout the course of the LC. The most common number of sessions was three (range one to four), and sessions typically lasted two days (range, a half day to three days). One of the studies was an RCT; the four conditions compared were interest circle calls (group teleconference calls), individual site coaching (that is, in-person, telephone, and e-mail support provided to each site), in-person learning sessions, and a combination of all three (34). All studies appear to have included in their sessions some didactic training in a particular care process or specific practice. One study, which focused on care for attention-deficit hyperactivity disorder in primary care clinics, used a combination of shorter in-person sessions (four 90-minute sessions focused on didactic lectures and quality improvement methods) and office visits (28–30).

In the National Child Traumatic Stress Network model, all LC participants had already received standard training by the treatment developer in trauma-focused cognitive behavioral therapy before the LC began (7). Participants in a NIATx collaborative took part in a two-day workshop on an evidence-based practice, Seeking Safety (19), in addition to LC activities. Similarly, participants in the NASBHC collaborative learned core components from evidence-based treatment elements for depression, anxiety, disruptive behavior disorders, and substance abuse, along with selected manualized interventions (39). Participants in an LC on engagement strategies received training for agency staff in addition to the standard learning sessions (37).

All of the studies that included descriptions of the in-person sessions also reported that the LC faculty provided training in quality improvement techniques, such as engaging in PDSA cycles or improvement projects. Very few details were provided on the techniques that were taught. In some studies, the LC purveyors had already identified potential areas for improvements that sites should consider for their quality improvement projects (for example, domains in the chronic care model, system improvements, and known implementation barriers) (7,18,31,33). In addition to didactic training related to practices and quality improvement methods, four of the studies reported that individual sites presented information to other participating quality improvement teams during the in-person sessions (7,15,27,37). Few specific details were included about the structure of these cross-site collaborative efforts. Some studies reported having individual site presentations, breakout sessions among “affinity groups,” or the use of “storyboards” (7).

PDSAs.

Twelve studies reported use of PDSAs between in-person sessions during “action periods” (7,17–19,28–31,34–38,41). It was largely unclear what occurred during the PDSA cycles, how they were used, or how the ongoing data collection informed the quality improvement process. A few studies, however, provided some detail about use of quality improvement methods: in those LCs, the faculty set forth possible improvement areas from which a site could develop its PDSAs or provided hands-on coaching and support (7,17,18,29–31,34,37,38). One study did not include PDSAs but instead provided teams with a template to develop “work plans” to facilitate the integration of mental health and primary care in school-based health centers (39).

Quality improvement team calls.

Twelve studies reported that there were calls between in-person sessions for the quality improvement teams (7,15–18,27,31–39). The calls were typically held monthly with the goal of allowing sites to share progress and solve problems together. Few details were provided on the content or structure of the calls. Two studies reported holding “all collaborative” calls to facilitate sharing and problem solving (7,37). Others described “affinity group” calls targeted toward clinical supervisors, change leaders, or executive leadership or calls focused on specific clinical issues and other special topics (7,38). Studies using the NIATx model also described holding individual site coaching calls focused on the use of process improvement methods (34,38).

E-mail or Web support.

Six studies reported e-mail or Web-based support for the LC participants (7,17,33–37). Articles did not provide information about the extent to which LC participants used e-mail Listservs or Web-based support to communicate with other LC participants or LC faculty.

Quality improvement processes.

Eleven studies reported some type of ongoing data collection for the purposes of the LC (for example, performance indicators and ongoing reporting on target outcomes) (7,17,18,28–30,32–36,38,39,41); eight reported that the LC faculty provided sites with data-based feedback (18,28–30,33–36,38,39,41). Nine studies reported external support with data collection and feedback (17,18,28–30,33–36,38,39,41). With a few exceptions (7,30,33,34,38), most articles provided very little information about the data collected, how data were used, or how the data informed quality improvement activities.

Organizational involvement.

Ten studies reported that the organization’s leadership was involved in the LC (7,15–17,19,27–30,34–38). However, it was unclear whether the organizational leadership was included as a part of the quality improvement team or was engaged through other outreach efforts. We also examined indicators of the LCs’ penetration into the broader organization by tracking the training provided to staff who were not members of the quality improvement team, either by LC faculty or by local quality improvement team members themselves. No studies reported providing expert training (conducted by LC faculty or treatment developers) for frontline staff members who were not already on the quality improvement team. Five studies reported that quality improvement team members trained additional staff in the organization (16,17,31,32,39).

Pre-LC activities.

Finally, we tracked “prework” activities, which we defined as planning activities delineated in the original IHI BTS model (8,9). Only five studies reported that the LC used an expert panel during this prework phase—that is, a planning group that identifies targets for improvement and that plans the LC (7,35–39). Eight studies reported requiring formal commitments, application criteria, or readiness activities before the start of the LC (7,15,16,18,27,32,35,36,38,39).

Study goals and findings

Study goals.

The primary intent of 19 of the 20 articles was either to explore the general feasibility and acceptability of the LC model or to examine pre-post LC changes at the patient and provider levels. The only RCT was designed to test which LC components were most related to change (34). In this study, sites were randomly assigned to receive interest circle calls (group teleconferences), individual site coaching, in-person learning sessions, or a combination of all three components, with the intent of examining which components were related to study outcomes. The study’s use of individual site coaching was uncommon; coaching was described in some studies of the NIATx model (35,36), but most articles did not specify its use.

Across the studies, ten examined provider-level variables (7,17–19,30–32,37,39,41), 11 examined patient-level variables (15–17,19,28,29,34–36,38,41), nine examined acceptability of the LC model to providers (7,17–19,31,32,37,39,41), and eight examined sustainability of the changes achieved (7,19,27,30,31,34,36,39). One study examined the relation between LC components and study outcomes in an RCT (34). Three studies examined how elements of the LC process may have contributed to the findings from the LC by exploring issues such as the relation between reported barriers and facilitators (31), social networks (31), and theoretically or empirically derived attitudinal and contextual factors (for example, team effectiveness) (33,40) and changes in outcomes. In addition, two articles provided cost estimates for participation in the collaborative (31,34) (Table 3).
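The article counts in the paragraph above follow directly from the citation lists. As a quick check, a small hypothetical helper can expand the numeric ranges (written here with hyphens) and tally the articles per outcome category:

```python
def expand(citations: str) -> set[int]:
    """Expand a citation list such as '7,17-19,30-32' into a set of reference numbers."""
    refs: set[int] = set()
    for part in citations.split(","):
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            refs.update(range(lo, hi + 1))
        else:
            refs.add(int(part))
    return refs

counts = {
    "provider-level": expand("7,17-19,30-32,37,39,41"),
    "patient-level": expand("15-17,19,28,29,34-36,38,41"),
    "acceptability": expand("7,17-19,31,32,37,39,41"),
    "sustainability": expand("7,19,27,30,31,34,36,39"),
}
for name, refs in counts.items():
    print(name, len(refs))
# provider-level 10, patient-level 11, acceptability 9, sustainability 8
```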

Table 3 Variables examined by 20 articles on learning collaboratives (LCs) included in the reviewa
Variables tracked: provider-level variables; patient-level variables; acceptability of the LC model to providers; sustainability of changes; relationship between LC components and LC outcomes; relationship between aspects of implementation and LC outcomes; cost estimates.
Cavaleri et al., 2006 (15)b | patient-level variables
Cavaleri et al., 2007 (27)b | sustainability of changes
Cavaleri et al., 2010 (16) | patient-level variables
Duffy et al., 2008 (32) | provider-level variables; acceptability
Ebert et al., 2012 (7) | provider-level variables; acceptability; sustainability
Epstein et al., 2008 (28)c | patient-level variables
Epstein et al., 2010 (29)c | patient-level variables
Epstein et al., 2010 (30)c | provider-level variables; sustainability
Gustafson et al., 2013 (34) | patient-level variables; sustainability; LC components and outcomes; cost estimates
Haine-Schlagel et al., 2013 (37) | provider-level variables; acceptability
Katzelnick et al., 2005 (17) | provider-level variables; patient-level variables; acceptability
Hoffman et al., 2008 (36)d | patient-level variables; sustainability
McCarty et al., 2007 (35)d | patient-level variables
Meredith et al., 2006 (31) | provider-level variables; acceptability; sustainability; aspects of implementation and outcomes; cost estimates
Roosa et al., 2011 (19) | provider-level variables; patient-level variables; acceptability; sustainability
Rutkowski et al., 2010 (38) | patient-level variables
Stephan et al., 2011 (39) | provider-level variables; acceptability; sustainability
Strating et al., 2012 (41) | provider-level variables; patient-level variables; acceptability
Vannoy et al., 2011 (18) | provider-level variables; acceptability
Versteeg et al., 2012 (33) | aspects of implementation and outcomes

a Articles are organized by author name and grouped by study.

b,c,d Articles that share a footnote are based on data from a single study.


Study findings.

There was wide variability in study designs and methods, quality of the methodology, and methodological details provided in the articles. Moreover, with the exception of one RCT (34), the strength of the outcomes was difficult to judge across studies because of the lack of control groups and the variability in the reporting of the LC elements. Therefore, we were unable to draw conclusions about the overall effectiveness of the LC within the mental health context.

However, the study by Gustafson and colleagues (34) suggested that certain LC elements may be more potent in predicting patient outcomes. Specifically, the authors found that waiting times declined for clinics assigned to individual site coaching, to in-person learning sessions, or to the combination of all three LC components (in-person learning sessions, individual site coaching, and group calls). They also found that the number of new patients increased in the combination and coaching-only groups and that interest circle group teleconferences had no impact on outcomes. Although individual site coaching and the combination intervention were similarly effective, individual site coaching was more cost-effective over the 18-month study period ($2,878 per clinic versus $7,930) (34).

Of the 19 other articles that were not RCTs, most reported positive findings with respect to patient, provider, or sustainability variables. Each of the ten articles that reported on provider-level variables reported positive trends from pre- to post-LC, suggesting improvements in areas such as process of care and uptake of new practices (7,17–19,30–32,37,39,41). Similarly, although there were some mixed findings, each of the 11 articles that reported on patient-level variables reported positive pre- to post-LC changes in areas such as symptoms and engagement in services (15–17,19,28,29,34–36,38,41). Six of the eight articles that reported on sustainability reported sustained use of new practices or procedures after the conclusion of the LC (7,27,30,31,36,39). In addition, the LC model was reported to be feasible and acceptable to providers in each of the nine articles that assessed these variables (7,17–19,31,32,37,39,41).

Discussion

The use of LCs in the mental health context is an important area for research as policy makers seek to scale up evidence-based practices and improve the quality of care. LCs are being widely used as an attractive alternative to traditional developer training models because they hold promise for achieving sustained change in a way that typical treatment developer trainings may not (7,44–46). LCs can help sites build local capacity and address organization- and provider-level implementation barriers (44,47,48). They have the potential to foster local ownership of the implementation process, promote transparency and accountability, create a culture of continuous learning, provide an infrastructure for addressing barriers, and cultivate support networks (7,44).

The major challenge for the mental health field is the lack of rigorous studies of LCs. In our previous review, we found 20 studies of LCs in other areas of health care that used comparison groups (14), but only one study in mental health care was an RCT (34). In the review reported here, we identified 20 articles that reported data on LC outcomes. Although we can be encouraged by the positive trends reported in these studies with respect to provider, patient, and sustainability outcomes, the findings must be interpreted with caution given the lack of comparison data. In addition, because of the variability in methods and rigor used in these studies, it was not possible to come to any broad conclusions about the effects of LCs on provider- or patient-level outcomes.

It is critical that future research on LCs include more studies with comparison conditions, ideally with randomized designs that can examine the impact of different implementation strategies. A number of quality improvement approaches to implementing new practices could be tested against LCs. Several approaches have shown evidence of improving the quality of care: audit and feedback methods from health care (49); individual site–focused quality improvement initiatives that involve training of local quality improvement teams, leadership support, coaching, and audit and feedback (50,51); and the availability, responsiveness, and continuity model, an organizational-level quality improvement intervention (52). In addition, a review of Six Sigma and Lean continuous improvement approaches borrowed from industry and applied in health care suggests that these are promising strategies that could be further tested (53). Of particular importance are studies such as the one conducted by Gustafson and colleagues (34) that can identify which structural and theoretical components of LCs contribute to favorable outcomes.

Recent studies provide insights into active components that could be directly tested. These include cross-site and local learning activities (for example, staff education, plan-do-study-act cycles, and team effectiveness) (31,41,48,54,55), local leadership support, sites' ability to address common implementation barriers, expert support, ongoing data collection, and the visibility of local changes achieved through quality improvement methods (3,33,48,56–59). In addition, there is a great need to continue to examine the costs associated with LCs and the incremental cost-benefit of using this approach, compared with traditional developer trainings and other quality improvement methods. This type of information is critical for decision makers because LCs can be costly. One study of an LC for depression care reported that the average cost of participation was more than $100,000 per site (31). Another study suggested that the added cost of in-person learning sessions may bring little incremental cost-benefit with respect to patient outcomes, compared with individual site coaching (34).

With respect to the reporting of LC components, we found patterns similar to those in previous research. Prior reviews have highlighted variation in implementation of the LC model and inconsistent reporting of components (4,13,14,25). The LCs in this review had a similar structure across studies. However, most studies provided insufficient detail about which LC components were present and how they were implemented. Moreover, because the original QIC models in health care were based on management theory (10–12), the lack of specificity on how process improvement was conducted, how quality improvement data were collected, and how data were used is striking. It is essential to carefully describe how quality improvement methods are being used in mental health care, both because previous studies have suggested that LC participants perceive instruction in quality improvement methods to be useful (31,48,59) and because the innovations implemented in mental health are often complex evidence-based treatments that may require adaptations of the original QIC models in health care. This review provides one potential template for the reporting of specific LC components, each of which should be described in sufficient detail that others could replicate the activities and processes (that is, "dosage" provided, engagement of participants, details on how quality improvement was taught, how data were used, and how teams and leadership were engaged). In addition, it will be important for future research to report on and explore theoretically driven active ingredients of LCs by examining not only LC structures but also LC processes.

Some limitations should be considered in interpreting these findings. As with any systematic review, it is possible that relevant studies were omitted. By searching multiple databases, reviewing the reference lists of key articles, and cross-checking with free-text search terms, we minimized the possibility of such omissions. In addition, negative findings often go unpublished, which may have biased our results toward positive effects. Despite these potential limitations, our review provides an important assessment of the state of the evidence for use of LCs in mental health care. Uses of LCs that focus on processes of care (for example, implementation of engagement practices and depression guidelines) align more closely with the targets of collaboratives that have been applied in other areas of health care. The applicability of LCs for disseminating and implementing more complex mental health evidence-based practices remains unknown; in the mental health field, such efforts often require additional specialized trainings to develop provider skills in implementing these evidence-based practices. The cost-effectiveness or added value of such an approach must thus be carefully assessed.

Conclusions

As LCs continue to grow in popularity among policy makers and national organizations, there is great need for rigorous research that evaluates the utility of these costly endeavors. Moreover, research focused on active components of LCs is vital to the replication of successful LCs, ensuring quality and fidelity to the model, guiding future adaptations, and identifying the types of innovations and improvements for which the model is most appropriate.

Except for Ms. Hill, the authors are with the Department of Child and Adolescent Psychiatry, New York University, New York City (e-mail: ). Ms. Hill is with the School of General Studies, Columbia University, New York City.

Acknowledgments and disclosures

Writing of this article was supported by grants from the National Institutes of Health: K01MH083694 to Dr. Nadeem and P30 MH090322 to Dr. Hoagwood.

The authors report no competing interests.

References

1 Honberg R, Kimball A, Diehl S, et al.: State Mental Health Cuts: The Continuing Crisis. Alexandria, Va, National Alliance on Mental Illness, 2011

2 SAMHSA is accepting applications for up to $30 million in State Adolescent Enhancement and Dissemination grants. News release. Rockville, Md, Substance Abuse and Mental Health Services Administration, June 12, 2012. Available at www.samhsa.gov/newsroom/advisories/1206124742.aspx

3 Ayers LR, Beyea SC, Godfrey MM, et al.: Quality improvement learning collaboratives. Quality Management in Health Care 14:234–247, 2005

4 Mittman BS: Creating the evidence base for quality improvement collaboratives. Annals of Internal Medicine 140:897–901, 2004

5 Øvretveit J, Bate P, Cleary P, et al.: Quality collaboratives: lessons from research. Quality and Safety in Health Care 11:345–351, 2002

6 Becker DR, Drake RE, Bond GR, et al.: A national mental health learning collaborative on supported employment. Psychiatric Services 62:704–706, 2011

7 Ebert L, Amaya-Jackson L, Markiewicz J, et al.: Use of the Breakthrough Series Collaborative to support broad and sustained use of evidence-based trauma treatment for children in community practice settings. Administration and Policy in Mental Health and Mental Health Services Research 39:187–199, 2012

8 The Breakthrough Series: IHI’s Collaborative Model for Achieving Breakthrough Improvement. IHI Innovation Series White Paper. Cambridge, Mass, Institute for Healthcare Improvement, 2003

9 Kilo CM: A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement’s Breakthrough Series. Quality Management in Health Care 6:1–13, 1998

10 Deming WE: Out of the Crisis. Cambridge, Mass, MIT Press, 1986

11 Juran JM: Quality Control Handbook. New York, McGraw-Hill, 1951

12 Juran JM: Managerial Breakthrough. New York, McGraw-Hill, 1964

13 Schouten LMT, Hulscher MEJL, van Everdingen JJ, et al.: Evidence for the impact of quality improvement collaboratives: systematic review. British Medical Journal 336:1491–1494, 2008

14 Nadeem E, Olin SS, Hill LC, et al.: Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Quarterly 91:354–394, 2013

15 Cavaleri MA, Gopalan G, McKay MM, et al.: Impact of a learning collaborative to improve child mental health service use among low-income urban youth and families. Best Practices in Mental Health 2:67–79, 2006

16 Cavaleri MA, Gopalan G, McKay MM, et al.: The effect of a learning collaborative to improve engagement in child mental health services. Children and Youth Services Review 32:281–285, 2010

17 Katzelnick DJ, Von Korff M, Chung H, et al.: Applying depression-specific change concepts in a collaborative breakthrough series. Joint Commission Journal on Quality and Patient Safety 31:386–397, 2005

18 Vannoy SD, Mauer B, Kern J, et al.: A learning collaborative of CMHCs and CHCs to support integration of behavioral health and general medical care. Psychiatric Services 62:753–758, 2011

19 Roosa M, Scripa JS, Zastowny TR, et al.: Using a NIATx based local learning collaborative for performance improvement. Evaluation and Program Planning 34:390–398, 2011

20 Evans AC, Rieckmann T, Fitzgerald MM, et al.: Teaching the NIATx model of process improvement as an evidence-based process. Journal of Teaching in the Addictions 6:21–37, 2008

21 Kilo CM: Improving care through collaboration. Pediatrics 103(suppl E):384–393, 1999

22 Plsek PE: Collaborating across organizational boundaries to improve the quality of care. American Journal of Infection Control 25:85–95, 1997

23 Berwick DM: Continuous improvement as an ideal in health care. New England Journal of Medicine 320:53–56, 1989

24 Laffel G, Blumenthal D: The case for using industrial quality management science in health care organizations. JAMA 262:2869–2873, 1989

25 Solberg LI: If you’ve seen one quality improvement collaborative. Annals of Family Medicine 3:198–199, 2005

26 Wilson T, Berwick DM, Cleary P: What do collaborative improvement projects do? Experience from seven countries. Joint Commission Journal on Quality and Patient Safety 30:25–33, 2004

27 Cavaleri MA, Franco LM, McKay MM, et al.: The sustainability of a learning collaborative to improve mental health service use among low-income urban youth and families. Best Practices in Mental Health 3:52–61, 2007

28 Epstein JN, Langberg JM, Lichtenstein PK, et al.: Community-wide intervention to improve the attention-deficit/hyperactivity disorder assessment and treatment practices of community physicians. Pediatrics 122:19–27, 2008

29 Epstein JN, Langberg JM, Lichtenstein PK, et al.: Attention-deficit/hyperactivity disorder outcomes for children treated in community-based pediatric settings. Archives of Pediatrics and Adolescent Medicine 164:160–165, 2010

30 Epstein JN, Langberg JM, Lichtenstein PK, et al.: Sustained improvement in pediatricians' ADHD practice behaviors in the context of a community-based quality improvement initiative. Children's Health Care 39:296–311, 2010

31 Meredith LS, Mendel P, Pearson M, et al.: Implementation and maintenance of quality improvement for treating depression in primary care. Psychiatric Services 57:48–55, 2006

32 Duffy FF, Chung H, Trivedi M, et al.: Systematic use of patient-rated depression severity monitoring: is it helpful and feasible in clinical psychiatry? Psychiatric Services 59:1148–1154, 2008

33 Versteeg MH, Laurant MG, Franx GC, et al.: Factors associated with the impact of quality improvement collaboratives in mental healthcare: an exploratory study. Implementation Science 7:1–11, 2012

34 Gustafson DH, Quanbeck AR, Robinson JM, et al.: Which elements of improvement collaboratives are most effective? A cluster-randomized trial. Addiction 108:1145–1157, 2013

35 McCarty D, Gustafson DH, Wisdom JP, et al.: The Network for the Improvement of Addiction Treatment (NIATx): enhancing access and retention. Drug and Alcohol Dependence 88:138–145, 2007

36 Hoffman KA, Ford JH, Choi D, et al.: Replication and sustainability of improved access and retention within the Network for the Improvement of Addiction Treatment. Drug and Alcohol Dependence 98:63–69, 2008

37 Haine-Schlagel R, Brookman-Frazee L, Janis B, et al.: Evaluating a Learning Collaborative to implement evidence-informed engagement strategies in community-based services for young children. Child and Youth Care Forum 42:457–473, 2013

38 Rutkowski BA, Gallon S, Rawson RA, et al.: Improving client engagement and retention in treatment: the Los Angeles County experience. Journal of Substance Abuse Treatment 39:78–86, 2010

39 Stephan S, Mulloy M, Brey L: Improving collaborative mental health care by school-based primary care and mental health providers. School Mental Health 3:70–80, 2011

40 Strating MMH, Nieboer AP: Norms for creativity and implementation in healthcare teams: testing the group innovation inventory. International Journal for Quality in Health Care 22:275–282, 2010

41 Strating MMH, Broer T, van Rooijen S, et al.: Quality improvement in long-term mental health: results from four collaboratives. Journal of Psychiatric and Mental Health Nursing 19:379–388, 2012

42 Cretin S, Shortell SM, Keeler EB: An evaluation of collaborative interventions to improve chronic illness care: framework and study design. Evaluation Review 28:28–51, 2004

43 NIATx: The NIATx Model. Madison, University of Wisconsin–Madison, 2013. Available at www.niatx.net/Content/ContentPage.aspx?PNID=1&NID=8

44 Bero LA, Grilli R, Grimshaw JM, et al.: Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. British Medical Journal 317:465–468, 1998

45 Stirman SW, Crits-Christoph P, DeRubeis RJ: Achieving successful dissemination of empirically supported psychotherapies: a synthesis of dissemination theory. Clinical Psychology: Science and Practice 11:343–359, 2004

46 McHugh RK, Barlow DH: The dissemination and implementation of evidence-based psychological treatments: a review of current efforts. American Psychologist 65:73–84, 2010

47 Feldstein AC, Glasgow REA: A practical, robust implementation and sustainability model (PRISM) for integrating research findings into practice. Joint Commission Journal on Quality and Patient Safety 34:228–243, 2008

48 Nembhard IM: Learning and improving in quality improvement collaboratives: which collaborative features do participants value most? Health Services Research 44:359–378, 2009

49 Jamtvedt G, Young JM, Kristoffersen DT, et al.: Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. Quality and Safety in Health Care 15:433–436, 2006

50 Wells KB, Sherbourne C, Schoenbaum M, et al.: Impact of disseminating quality improvement programs for depression in managed primary care: a randomized controlled trial. JAMA 283:212–220, 2000

51 Asarnow JR, Jaycox LH, Duan N, et al.: Effectiveness of a quality improvement intervention for adolescent depression in primary care clinics: a randomized controlled trial. JAMA 293:311–319, 2005

52 Glisson C, Hemmelgarn A, Green P, et al.: Randomized trial of the Availability, Responsiveness, and Continuity (ARC) organizational intervention with community-based mental health programs and clinicians serving youth. Journal of the American Academy of Child and Adolescent Psychiatry 51:780–787, 2012

53 Vest JR, Gamm LD: A critical review of the research literature on Six Sigma, Lean and StuderGroup’s Hardwiring Excellence in the United States: the need to demonstrate and communicate the effectiveness of transformation strategies in healthcare. Implementation Science 4:35, 2009

54 Nembhard IM, Tucker AL: Deliberate learning to improve performance in dynamic service settings: evidence from hospital intensive care units. Organization Science 22:907–922, 2011

55 Nembhard IM: All teach, all learn, all improve? The role of interorganizational learning in quality improvement collaboratives. Health Care Management Review 37:154–164, 2012

56 Brandrud AS, Schreiner A, Hjortdahl P, et al.: Three success factors for continual improvement in healthcare: an analysis of the reports of improvement team members. BMJ Quality and Safety 20:251–259, 2011

57 Dückers ML, Spreeuwenberg P, Wagner C, et al.: Exploring the black box of quality improvement collaboratives: modelling relations between conditions, applied changes and outcomes. Implementation Science 4:74–85, 2009

58 Pinto A, Benn J, Burnett S, et al.: Predictors of the perceived impact of a patient safety collaborative: an exploratory study. International Journal for Quality in Health Care 23:173–181, 2011

59 Shortell SM, Marsteller JA, Lin M, et al.: The role of perceived team effectiveness in improving chronic illness care. Medical Care 42:1040–1048, 2004