Evidence-Based Practices

Organizational Process: A Missing Link Between Research and Practice

Abstract

Organizational process is an underexamined barrier and a potential bridge for the introduction of innovative treatment models into mental health practice. The author describes key operational characteristics of large, complex organizations and strategies that have been used to facilitate implementation of innovative programs in the Department of Veterans Affairs health care system. He argues that complex organizations of the type in which mental health care is increasingly delivered are characterized by multiple competing goals, uncertain technologies, and fluid involvement of key participants. Interventions shown to be effective in controlled studies are often not easily introduced into such organizations, because research is typically conducted in a buffered organizational niche that is shielded from the complex open systems around it. Key strategies for moving research into practice include constructing decision-making coalitions, linking new initiatives to legitimated goals and values, quantitatively monitoring implementation and ongoing performance, and developing self-sustaining communities of practice as well as learning organizations. The author shows how effective dissemination of new treatment methods requires attention to and effective engagement with organizational processes.

The goal of medical research is to develop and test treatments that can be used widely to improve public health. However, recent evidence suggests that services provided for mental illnesses such as depression, anxiety, and schizophrenia often fail to conform to best-practice standards (1,2).

A recent report, Bridging Science and Service (3), presented 49 recommendations for increasing the usefulness of research conducted by the National Institute of Mental Health, expanding the portfolio of studies in real-world settings, and identifying methodological innovations to facilitate research on the translation of new information into practice. This approach to closing the research-to-practice gap is based on the assumption that research conducted under real-world conditions is more relevant to practitioners and program managers and therefore more frequently applied to practice.

In this review, I outline a complementary perspective on the research-to-practice gap that identifies organizational process as a largely unaddressed barrier and as a potential bridge between research and practice. From this perspective, research with more generalizable results, by itself, will play a limited role in furthering the passage of research findings into practice. Drawing on a half century of work by organizational researchers (4,5), I suggest that although mental health providers, like others who work in complex organizations, are highly respectful of scientific evidence in principle, their daily decision making is shaped more by power structures, ingrained routines, and established resource configurations than by current scientific findings. The path from research to practice is thus likely to run through several challenging demands, including coalition decision making; incremental efforts that link new approaches to previously legitimated policies, values, and traditions; implementation of quantitative performance monitors; and establishment of clinical subcultures rooted in reinforcing communities of practice.

I review basic principles of organizational theory as they pertain to large, complex organizations; contrast the organizational contexts of practice and research; outline an agenda for dissemination research; and examine organizational processes that support clinical innovation.

The observations presented here are largely descriptive and have not yet been subjected to systematic empirical research. Thus I seek to draw attention to phenomena that are familiar to experienced administrators, managers, and organizational theorists but that have not been studied by mental health services researchers.

Bounded rationality in complex organizations

The organizational approach on which I draw derives ultimately from the work of Herbert Simon (4,5), who observed that the complexity of most organized human activity and the uncertainty associated with most technologies make human rationality a "bounded rationality" that is more likely to be guided by estimates, approximations, shared values, habits, and group identifications than by formal evaluation of comprehensive sets of alternative action plans. Even in highly technological organizations, such as the National Aeronautics and Space Administration, studies have shown that decision making is dominated by standard operating procedures and behavioral norms (6). Although there may be value in developing more generalizable scientific evidence, such research is not, in itself, likely to facilitate the translation of research into practice.

Organizational context of mental health practice

Although the fractious environment of large organizations is a common source of frustration and demoralization for professionals (7), there has been little discussion in the mental health literature in recent years of theories that seek to account for the distinctive features of such organizations. In one often-cited conceptualization (8), large human service or educational organizations are described as "organized anarchies" or "adhocracies" characterized by multiple and often conflicting goals, unclear and uncertain technologies for realizing those goals, and fluid participation and inconsistent attentiveness of principal actors.

Multiple goals

Mental health organizations, like other health care delivery systems, seek to achieve a particularly broad range of goals, including improving the health of patients, shoring up patients' income streams, fostering staff development and satisfaction, meeting external regulatory and accreditation requirements, winning the support of government and community stakeholders, protecting the public from dangerous behavior, and supporting education and research.

In addition to these competing tasks, staff groups have cross-cutting interests related to their professional affiliations and identities. Although diverse organizational goals and subgroup interests are not intrinsically incompatible with one another, task groups invariably compete for the same limited pools of material resources and managerial attention. Thus, although complex organizations benefit from a division of labor that allows the development of specialized skills, participants are often more invested in the achievement of local subgoals than in the larger organizational objectives. Agreement on priorities is thus difficult to achieve and is often bitterly contested.

Uncertain technologies

In mental health, as in other areas of health care, technologies are only partly effective. As a result, their legitimacy is based on a mixture of scientific evidence, ideological conviction, and subjective professional opinion. Although outcome studies convincingly demonstrate the benefits of certain treatments, on average, for large groups of patients, the impact of these studies on particular individuals is not unambiguously discernible at the clinical level. For example, a large randomized clinical trial of clozapine (9) clearly showed the benefits of this new drug; however, further analysis indicated that taking the drug explained only 4 percent of the variance in symptom outcomes for individual patients and only 7 percent of the variance in side effects. Although rigorous studies clearly demonstrate the potential effectiveness of many mental health treatments, even highly respected treatments often have effect sizes of modest magnitude (10,11). Unlike the products of manufacturing industries (12), most medical treatments, including mental health treatments, have variable and often ambiguous impacts on individuals and therefore leave ample room for debate about which practices are best.
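To make this point concrete, the short sketch below converts a standardized between-group effect size (Cohen's d) into the proportion of individual-level outcome variance it explains, using the standard identity r² = d²/(d² + 4) for two equal groups. The d values are hypothetical, chosen only to illustrate the magnitudes discussed above.

```python
# Hypothetical illustration: how a between-group effect size (Cohen's d)
# maps onto the share of individual-level outcome variance explained.
# Uses the standard conversion r = d / sqrt(d^2 + 4) for two equal groups.
import math

def variance_explained(d: float) -> float:
    """Proportion of outcome variance explained by treatment assignment."""
    r = d / math.sqrt(d ** 2 + 4)
    return r ** 2

for d in (0.2, 0.4, 0.8):  # hypothetical small-to-large effect sizes
    print(f"d = {d:.1f} -> {variance_explained(d):.1%} of variance explained")

# Output: roughly 1%, 4%, and 14%. An effect of d of about 0.4 explains
# about 4 percent of individual variance, the order of magnitude reported
# for clozapine's symptom outcomes (9), despite clear average benefits.
```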

Fluid participation and attention

The achievement of diverse goals through uncertain technologies by highly differentiated groups of autonomy-seeking professionals would seem to require close coordination by knowledgeable and attentive leaders. However, governance of complex organizations is often characterized by leadership change and uncertainty. Leadership turnover is substantial in many settings, and in large organizations like the Veterans Affairs health care system, clinical staff may be directed by up to nine hierarchical levels, each with its own priorities and determination to pursue them. Leaders typically do not have enough time to devote their full attention to even a fraction of the issues for which they are responsible. Managerial attention has been described as the most limited resource in large organizations (13).

In addition to these internal dynamics, externally driven factors, including real and threatened resource shortages and changes in administrative rules and certification procedures, impose further demands and distractions on key actors. Conflict, change, and stress are thus ubiquitous in large organizations, and problems are often not so much solved as superseded by other problems.

Contrasting contexts of research and practice

It is in this field of competition, ambiguity, and fluid managerial attention that efforts to import research findings into practice take place. In contrast with the swirling life of complex organizations, the environment in which research interventions are introduced is generally characterized by clear and well-defined goals—to test study hypotheses; explicitly defined technologies—standard research designs and interventions; and in most cases, unflagging leadership from the principal investigator, at least over a circumscribed period. The clinical research protocol, sanctified by the lengthy review process of funding agencies and human investigations committees, is an island of protected order in a relatively stormy sea of organizational process.

In a seminal contribution to organizational theory, James Thompson (14) observed that although much of organizational life involves the clash of established routines and unanticipated environmental contingencies, certain core functions are protected or buffered from uncertainty and turmoil. Whether they are central to the organization's survival or are privileged for some other reason, specialized resources are used to insulate such activities from the surrounding environment. Thus, although an organization taken as a whole may operate as an open system reacting to a panoply of internal and external contingencies, selected subsystems may be cordoned off and shielded from forces that might divert them from achieving their purposes.

Clinical and even health services research studies appear to be protected in just this way, in part because they are time limited and thus can practically be granted temporary immunity from organizational flux and in part because they enjoy high prestige as the source of past and future legitimacy for the entire health care enterprise. For example, although program managers frequently experience midstream changes in mission, resources, or leadership, such upheavals are far less common in research studies once they are launched. Treatment offered within a research study is thus quite different from treatment offered in the turbulent environment in which it is eventually to be applied. The research-to-practice gap thus reflects, in part, the gap between the protected environment in which research takes place and the complex, heavily contested field in which the results are eventually applied.

For example, even though studies of academic detailing take place in natural clinical settings, they are limited by the fact that they address a small number of best practices that are carefully selected by the researchers (15). In more open organizational systems, there is often considerable debate about which of the many best practices should be selected for detailing and considerable resistance among those whose favored choices are not selected for emphasis.

Because the central methodological challenge of research is to isolate the effect of discrete factors on health, with everything else held as constant as possible, it is natural for researchers to expect that once such factors are identified they will be deployed, one by one, to generate improvements in public health. However, in moving from research to practice, findings must enter a world of multiply determined processes that is shaped by demands of external stakeholders and by the interests of internal actors, all of which must be addressed simultaneously.

The dissemination process in complex organizations

In recent years it has been increasingly recognized that the evaluation of new treatments is a sequential process. The value of a treatment must first be demonstrated through efficacy research, in which treatments are tested in highly controlled settings. Such studies are followed by effectiveness research, in which treatments are evaluated in settings that more closely approximate real-world practice. Effectiveness studies are, in turn, followed by translational research that demonstrates whether and how research findings can be introduced into real-world practice (15).

The central place of organizational process in moving research findings into practice suggests that there is a need for a fourth kind of research, which I call dissemination process research. Dissemination research, as proposed here, is descriptive rather than quantitative, at least at this stage, and seeks to draw attention to processes of organizational life that must be addressed in traversing the road from research to practice. Such processes are rarely acknowledged in empirical studies, although their salience has been recognized by some (7).

For example, an innovative empirical study of the implementation of family psychoeducation at more than 50 sites in Maine and Illinois found that over and above the influence of practitioners' attitudes, educational interventions, and funding, the most powerful predictor of successful implementation was the state in which the program was located (16). Implementation was far more successful in Maine than in Illinois, a finding that could not be explained by the available quantitative analyses but that probably reflects a fact evident in the paper, though not explicitly acknowledged: the first author, a leading proponent of the approach, has lived and worked in Maine for many years and has apparently spent much of that time building and educating coalitions of advocates and providers in his home state.

As a framework for further describing the kinds of issues that shape the dissemination process, I identify four strategies that have been observed to be important in promoting the transition from research to practice. The first is the construction of leadership coalitions that favor implementation and that can provide ongoing support. The second strategy is linking initiatives to legitimated organizational goals and values. The third is the quantitative monitoring of fidelity to the model and ongoing program performance. The fourth strategy is the development of self-sustaining subcultures or communities of practice that both perpetuate and modify program procedures and values.

Over the past 15 years, the Northeast Program Evaluation Center, an arm of the Mental Health Strategic Healthcare Group of the Department of Veterans Affairs (VA), has participated in the development and dissemination of more than 900 VA programs for severely mentally ill veterans (17), for homeless veterans with mental illness (18), and for veterans with war-related posttraumatic stress disorder (PTSD) (19). In defining issues that are encountered in the dissemination process, I draw on relevant experiences with this broad range of programs.

Decision-making coalitions

Innovation begins with a decision by an individual or small group to adopt or disseminate a new treatment. Because in most situations no single proponent of an innovation can effect its adoption alone, even if the person has formal authority to do so, the first step in this process is a political one in which a coalition of advocates is assembled to argue, through both formal and informal channels, for a change in treatment process or for the development of a new program. Although discussions often refer generally to scientific findings and rely on legitimizing summaries, position papers, and statements of expert opinion, such discussions are not themselves scientific reviews. The eventual outcome of such efforts depends as much on the strength of the coalition, the resources it commands, and its persuasiveness as on the quality of available scientific evidence.

The organizational level of this decision is also important. If the impetus for innovation comes from the higher reaches of an organization, it has the potential to have a widespread impact. However, because many people in the organization must support it, consensus is more difficult to achieve, and implementation is less likely. If the impetus for implementation comes from lower down in the organization—closer to the grass roots—it is more likely to succeed, because fewer stakeholders need to concur, but the impact is likely to be limited and locally restricted. This essential trade-off between scope of dissemination and likelihood of success appears to be an iron rule of hierarchy.

Consistency with organizational objectives

One of the principal factors influencing the decision to implement a new treatment model is its relationship to larger organizational objectives. Implementation is facilitated if the intervention can be linked to broad organizational agendas that have taken-for-granted legitimacy, such as "excellence in health care value" and "health care second to none," and if it can be linked to narrower legitimizing agendas, such as relying on evidence-based medicine or addressing the problems of highly publicized target populations such as homeless veterans or Vietnam veterans. Legitimization of the intervention as "cutting edge," "state-of-the-art," or "standard" treatment by external experts, stakeholders, and published reports can also provide important credibility. This type of credibility is typically more relevant to decision making than scientific evidence itself.

For example, a study of the inpatient treatment of PTSD in the VA system showed that traditional long-stay programs had no greater clinical effect than other programs but cost $18,000 more per patient (20). Although this was the first comparative study of these programs, and most initial studies have little impact on practice until they have been replicated many times, this report appears to have stimulated widespread change in VA practice within only a few years. In 1996, when the internal VA report on this study was circulated, there were 25 traditional programs and 21 alternative programs—halfway houses, day hospitals, and short-term units. In 2000, however, there were only seven traditional programs and 34 new model programs. The reason for this rapid change is that the conclusions of the study were consistent with emerging organizational goals to reduce inpatient utilization and to shift the emphasis of care to community-based services.

Quantitative implementation assessment

Ensuring successful implementation of innovative treatments is perhaps the area of dissemination practice that has been most fully addressed by traditional researchers. Recent research has increasingly emphasized the ways in which novel treatments can be specified in written manuals (21) and implementation can be verified with formal assessment tools that quantify fidelity to the model (22).
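As a deliberately simplified sketch of how such fidelity tools work, the example below scores a program on a handful of model criteria rated 1 to 5, in the spirit of published fidelity scales (22). The item names, ratings, and cutoff are hypothetical and do not reproduce any actual instrument.

```python
# A minimal sketch of a fidelity-to-model score, loosely in the spirit of
# published fidelity scales (22). Item names, ratings, and the cutoff are
# hypothetical; real instruments define each rating anchor behaviorally.

HYPOTHETICAL_ITEMS = ["small_caseload", "team_approach", "in_vivo_services",
                      "assertive_engagement", "continuous_responsibility"]

def fidelity_score(ratings: dict[str, int]) -> float:
    """Mean of 1-5 item ratings; higher means closer to the intended model."""
    missing = [item for item in HYPOTHETICAL_ITEMS if item not in ratings]
    if missing:
        raise ValueError(f"Unrated items: {missing}")
    return sum(ratings[item] for item in HYPOTHETICAL_ITEMS) / len(HYPOTHETICAL_ITEMS)

site_ratings = {"small_caseload": 5, "team_approach": 4, "in_vivo_services": 3,
                "assertive_engagement": 4, "continuous_responsibility": 5}
score = fidelity_score(site_ratings)
print(f"Fidelity score: {score:.1f} ({'high' if score >= 4.0 else 'needs review'})")
```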

The increased availability and reduced cost of high-speed computers have made these research tools available for use in naturalistic program evaluation, allowing empirical documentation of the characteristics of clients served, the services that are delivered, and, in some cases, the outcomes and costs of treatment (17,18,19). Use of standardized evaluation instruments communicates concretely and directly to the clinical staff who use them the kinds of clients they are expected to serve, the services to be delivered, and the desired outcomes. For example, VA's homeless veteran programs have used such standardized evaluation procedures for more than 14 years; the programs now collect annual accountability data on more than 30,000 persons at more than 150 sites nationwide (23,24). The first evaluation report on homeless veterans' programs was submitted to Congress less than six months after the first client was seen. The report was also circulated to the sponsoring institutions and to the new programs themselves. It clearly defined the target population, the intended service models, and the targeted outcome domains. In our experience, such formalization helps buffer interventions from local processes that might divert them from their intended task.

Compilation and circulation of evaluation results are essential parts of this process, because they establish an accountability loop through which local performance can be compared with broader program standards and through which deviation from those standards can be addressed. Performance data from all specialized mental health programs monitored by the Northeast Program Evaluation Center are circulated, and those whose performance deviates from program norms are asked to explain the reasons for the deviations in writing and to either justify the deviations or identify a plan for remediating them (23,24). These feedback plans are included in annual program reports, which are circulated to all program directors, to local and national VA administrators, and to Congress.
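The logic of this accountability loop can be pictured as a comparison of each site's indicator values against the national norm, with deviating sites flagged for written explanation or remediation. The sketch below is a hypothetical rendering of that logic; the indicator, the site data, and the one-standard-deviation threshold are illustrative assumptions, not NEPEC's actual procedure.

```python
# Hypothetical sketch of the accountability loop: compare each site's
# performance indicator with the national norm and flag sites whose
# deviation exceeds a threshold so they can explain or remediate it.
# Data, indicator, and the 1-SD threshold are illustrative assumptions.

from statistics import mean, stdev

site_outreach_contacts = {  # hypothetical annual contacts per clinician
    "Site A": 310, "Site B": 295, "Site C": 140, "Site D": 305, "Site E": 450,
}

def flag_deviations(indicator: dict[str, float], threshold_sd: float = 1.0):
    """Return sites whose value deviates from the norm by > threshold_sd SDs."""
    norm, sd = mean(indicator.values()), stdev(indicator.values())
    return {site: round((value - norm) / sd, 2)
            for site, value in indicator.items()
            if abs(value - norm) > threshold_sd * sd}

for site, z in flag_deviations(site_outreach_contacts).items():
    print(f"{site}: z = {z:+.2f} -> written explanation or remediation plan")
```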

The availability of performance data can also aid in the development of supportive coalitions. Empirical outcome data from the Center for Mental Health Services' Access to Community Care and Effective Services and Supports (ACCESS) program for homeless people with mental illness were invaluable in supporting the efforts of coalitions that pushed to keep the clinical programs operating after federal research funding had ended (25).

Subculture development

In the long term, the development of a self-reinforcing, program-specific subculture is perhaps even more important than monitoring and enforcement of program standards. As experiences and challenges are shared, a community of practice develops on the basis of "the patterned social interaction between members that sustains organizational knowledge and facilitates its reproduction" (26). The key to developing such a community of practice is frequent interaction. Such interaction allows members to make sense of their common experience and to codify their jointly accrued knowledge in catchphrases, symbols, and stories (27).

For example, the staff of VA's intensive case management programs meet daily to review cases and procedures, and the staff of both specialized PTSD programs and homeless outreach programs are expected to share common office space and to have integrated leadership and regular meetings. Programs at different facilities have been knit together over the years by a series of training conferences, weekly conference calls during the start-up periods, and monthly calls thereafter. These activities have been maintained for well over a decade. Joint participation in national evaluation and monitoring efforts also contributes to program cohesiveness. Data reflecting the performance of each program as well as national trends are circulated and discussed, along with recent research findings of direct practical relevance.

A learning organization

An evolving community—or communities—of practice may eventually generate treatment activities that modify, reconfigure, or even replace previously disseminated program elements. As described above, a broad array of residential and outpatient programs began to replace the traditional long-term VA inpatient PTSD programs in the early 1990s (19). As originally disseminated, residential and outpatient programs were defined as separate initiatives with separate evaluation paradigms and administrative structures. However, after a few years, sites began to integrate their inpatient and outpatient PTSD programs, using creative mixtures of staff assignments, to offer a fuller continuum of care with an integrated staff and coordinated leadership. This development emerged from the logic of field-based experience, was reinforced by shared experience, and spread through what had become a "learning community" of VA PTSD programs (28,29). With less and less shaping from central staff, program guidance comes increasingly from the teams themselves.

As programs claim increasing ownership of their efforts, the distinction between erosion of performance standards and the development of creative, experience-based innovation becomes difficult to make and may result in considerable ambiguity in the final evaluation of program performance (30). Attention to these difficult issues will become more important as evidence-based practice becomes more widespread and as innovative models, deployed for extended periods, are increasingly subject to local modification.

Conclusions

I have described features of organizational life that can both impede and promote the adoption of evidence-based practices. The phenomena described here predominantly reflect experiences in the VA health care system, a large federal bureaucracy, over the past 15 years. However, such processes are likely to be characteristic of other large—and not so large—organizations, and they deserve the attention of health system managers as they strive to weave innovative treatments into the complex fabric of organizational life.

Acknowledgments

The author thanks Paul Errera, M.D., Thomas Horvath, M.D., Laurent Lehmann, M.D., Gay Koerber, M.A., and William Van Stone, M.D., who provided national leadership for the VA mental health programs described here. He also thanks the project directors from the Northeast Program Evaluation Center who participated in these projects: Rani Desai, Ph.D., Alan Fontana, Ph.D., Linda Frisman, Ph.D., Peggy Gallup, Ph.D., Wes Kasprow, Ph.D., Alvin Mares, Ph.D., Michael Neale, Ph.D., and Catherine Seibyl, M.S.N., M.P.H. Gregory Greenberg, Ph.D., made helpful comments on an earlier draft.

Dr. Rosenheck is director of the Veterans Affairs Northeast Program Evaluation Center, 950 Campbell Avenue, West Haven, Connecticut 06516 (e-mail, ). He is also professor of psychiatry and public health at Yale Medical School. Parts of this paper were presented as the Carl Taube Award Lecture at the annual meeting of the American Public Health Association held November 11-16, 2000, in Boston.

References

1. Young AS, Klap R, Sherbourne C, et al: The quality of care for depressive and anxiety disorders in the United States. Archives of General Psychiatry 58:55-63, 2001

2. Lehman AF, Steinwachs DM, and the co-investigators of the PORT project: Translating research into practice: the Schizophrenia Patient Outcomes Research Team (PORT) treatment recommendations. Schizophrenia Bulletin 24:1-10, 1998

3. National Advisory Mental Health Council, Clinical Treatment and Services Research Workgroup: Bridging Science and Service. Rockville, Md, National Institute of Mental Health, 1999

4. Simon H: Administrative Behavior: A Study of Decision-Making Processes in Administrative Organizations, 4th ed. New York, Free Press, 1997

5. March J, Simon H: Organizations. New York, Wiley, 1958

6. Vaughan D: The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, University of Chicago Press, 1996

7. Corrigan PW, Steiner L, McCracken SG, et al: Strategies for disseminating evidence-based practices to staff who treat people with serious mental illness. Psychiatric Services 52:1598-1606, 2001

8. March JG, Olsen JP: Ambiguity and Choice in Organizations. Bergen, Norway, Universitetsforlaget, 1976

9. Rosenheck RA, Cramer J, Xu W, et al: A comparison of clozapine and haloperidol in the treatment of hospitalized patients with refractory schizophrenia. New England Journal of Medicine 337:809-815, 1997

10. Burns BJ, Santos AB: Assertive community treatment: an update of randomized trials. Psychiatric Services 46:669-675, 1995

11. Drake RE, McHugo GJ, Bebout RR, et al: A randomized clinical trial of supported employment for inner-city patients with severe mental disorders. Archives of General Psychiatry 56:627-633, 1999

12. Chassin M: Is health care ready for six sigma quality? Milbank Quarterly 76:565-591, 1998

13. Simon H: Rationality as process and product of thought. American Economic Review Proceedings 68:1-16, 1978

14. Thompson J: Organizations in Action. New York, McGraw-Hill, 1967

15. Dixon L, Lyles A, Scott J, et al: Services to families of adults with schizophrenia: from treatment recommendations to dissemination. Psychiatric Services 50:233-238, 1999

16. McFarlane W, McNary S, Dixon L, et al: Predictors of dissemination of family psychoeducation in community mental health centers in Maine and Illinois. Psychiatric Services 52:935-942, 2001

17. Rosenheck RA, Neale MA: Development, implementation, and monitoring of intensive psychiatric community care in the Department of Veterans Affairs, in Achieving Quality in Psychiatric and Substance Abuse Practice: Concepts and Case Reports. Edited by Dickey B, Sederer L. Washington, DC, American Psychiatric Publishing, in press

18. Rosenheck RA, Leda C, Gallup PG: Program design and clinical operation of two national VA programs for homeless mentally ill veterans. New England Journal of Public Policy 8:315-337, 1992

19. Rosenheck RA, Fontana A: Changing patterns of care for war-related post-traumatic stress disorder at Department of Veterans Affairs medical centers: the use of performance data to guide program development. Military Medicine 164:795-802, 1999

20. Fontana A, Rosenheck RA: Effectiveness and cost of inpatient treatment of posttraumatic stress disorder. American Journal of Psychiatry 154:758-765, 1997

21. Carroll KM, Kadden RM, Donovan DM, et al: Implementing treatment and protecting the validity of the independent variable in treatment matching studies. Journal of Studies on Alcohol 58(suppl 12):149-155, 1997

22. Teague GB, Bond GR, Drake RE: Program fidelity in assertive community treatment: development and use of a measure. American Journal of Orthopsychiatry 68:216-232, 1998

23. Kasprow WJ, Rosenheck RA, Chapdelaine J, et al: Health Care for Homeless Veterans Programs: 13th Progress Report. West Haven, Conn, VA Northeast Program Evaluation Center, 2000

24. Seibyl CL, Rosenheck RA, Medak S, et al: The 11th Progress Report on the Domiciliary Care for Homeless Veterans Program. West Haven, Conn, VA Northeast Program Evaluation Center, 2000

25. Steadman H, Cocozza J, Lassiter MG, et al: Successful program maintenance when federal demonstration dollars stop: the ACCESS program for homeless mentally ill. Administration and Policy in Mental Health, in press

26. Aldrich H: Organizations Evolving. Thousand Oaks, Calif, Sage, 1999

27. Weick K: Sensemaking in Organizations. Thousand Oaks, Calif, Sage, 1995

28. Senge P: The Fifth Discipline: The Art and Practice of the Learning Organization. New York, Doubleday, 1990

29. DiBella AJ: Learning Practices: Assessment and Action for Organizational Improvement. Upper Saddle River, NJ, Prentice-Hall, 2001

30. March JG: Organizational learning, in The Pursuit of Organizational Intelligence. Edited by March JG. Oxford, England, Blackwell, 1999