
This article presents results from a cost-outcome study, the Texas Medication Algorithm Project (TMAP), which assessed an algorithm-based disease management program for adult outpatients with severe mental illness. The study was designed to assist practicing psychiatrists, who often must wade through myriad scientific articles to find the treatments best suited to the needs of individual patients.

TMAP began in 1995 as a public-academic collaborative effort between the Texas Department of Mental Health and Mental Retardation and the University of Texas Southwestern Medical Center at Dallas, the University of Texas College of Pharmacy at Austin, and the University of Texas Health Sciences Center at San Antonio. TMAP investigators developed and tested an algorithm-based medication management program with the clinical goal of reducing psychiatric symptom severity while keeping the burden of medication side effects within acceptable levels ( 1 , 2 , 3 , 4 , 5 , 6 ). The disease management program offered disease-specific, evidence-based medication algorithms for major depression, bipolar disorder, and schizophrenia, which were developed through expert consensus conferences. To facilitate algorithm adherence after an initial training program, clinicians received ongoing assistance from clinical coordinators, who reviewed charts, assessed patients, and helped physicians improve their adherence to the algorithms.

The study design ( 7 ), sample descriptions ( 10 , 11 , 12 ), analytic plan ( 8 ), and cost definitions ( 9 ) have been described elsewhere. In previous analyses, clinical symptoms were less severe during the first three months of treatment for patients assigned to physicians who used the algorithm-based program than for patients assigned to usual care. Over the remaining nine months of the one-year study, these initial positive effect sizes remained constant for major depression ( 10 ), declined nonsignificantly for bipolar disorder ( 11 ), and declined significantly for schizophrenia ( 12 ). The declining effect sizes reflected usual-care outcomes "catching up" to the outcomes in the algorithm-managed group.

This study compared differences in health care costs with previously reported differences in outcomes to determine the value of algorithm-based management in caring for severe mental illness. As in the outcomes analyses, costs were assessed for the initial three-month period and as a time trend over the remainder of the one-year study. To understand how costs were structured differently between the two programs, we computed costs by service type, broken down into the number of encounters and a cost per encounter, and expressed as an initial three-month cost plus a time trend. All care provided by licensed health professionals was considered in this study.

Methods

The prospective study compared algorithm-based management with usual care in 19 agency clinics matched on urban or rural location and patient ethnicity. Patients were followed for one to two years. Twelve sites, four per disorder, were randomly assigned to provide algorithm-based management; the remaining seven sites provided usual care. Consenting participants aged 18 years or older were treated by qualifying physicians for major depression, bipolar disorder, or schizophrenia; all participants required medication changes at the time of study entry because of unacceptable side effects or treatment nonresponse. Qualifying physicians were those who worked in only one of the agency's clinics and who were committed to working there at least one day per week. Clinics were selected to be geographically separated to prevent patients from being treated at more than one site.

As described elsewhere ( 7 , 10 , 11 , 12 ), trained assessors who were not involved in the clinical care of the patient administered structured face-to-face interviews at baseline and quarterly for at least one year. The primary clinical outcome for major depression was the 30-item Inventory of Depressive Symptomatology Clinician Rating (IDS-C30) ( 13 , 14 , 15 ); possible scores range from 0 to 84, and higher scores indicate more severe symptoms. For bipolar disorder, investigators administered the 24-item version of the Brief Psychiatric Rating Scale (BPRS-24) ( 16 , 17 ); possible scores range from 24 to 168. For schizophrenia, the 18-item version of the Brief Psychiatric Rating Scale ( 18 ) was administered; possible scores range from 18 to 126. Higher scores on both scales indicate more severe symptoms.

Cost and utilization information came from providers' medical and billing records. Providers were identified through the patient-administered Utilization and Cost Questionnaire ( 9 , 19 ). When provider records were unavailable, patient self-reports were substituted; these self-reports were calibrated and cost weighted by comparing patient responses with provider information when both were available. The number of days of active prescription orders was obtained by abstracting agency medical records ( 12 ). The absence of an active order in the medical chart can occur when medications are not prescribed, when patients miss visits and therefore do not have active medication orders, or when prescriptions are not accurately documented.

An encounter is defined as a contact between the patient and a provider on any given day. Encounters were counted separately by service type defined by setting (inpatient day, outpatient visit, pharmacy medication order day [see below]) and by provider (agency and nonagency). "Agency" refers to all salaried staff and contract-licensed health care professionals and facilities operated by or contracted through the Texas Department of Mental Health and Mental Retardation. "Nonagency" includes mostly private community-based providers not affiliated with the Texas Department of Mental Health and Mental Retardation. Agency services were further classified by type (pharmacy, physician, clinical coordinator, and other nonphysician). The cost of any given encounter was computed for inpatient care as a per diem rate based on the diagnosis-related group (DRG) code ( 20 ) and for outpatient care as the sum of charges for CPT procedure codes ( 21 ). Both DRG per diem costs and the costs for CPT procedures were priced with a uniform charge schedule in 2001 U.S. dollars.
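
To make this pricing rule concrete, the short sketch below prices an inpatient day at a DRG per diem rate and an outpatient visit as the sum of its CPT procedure charges. All codes and dollar amounts are hypothetical placeholders, not values taken from TMAP or from the uniform charge schedule.

```python
# Minimal sketch of the encounter-pricing rule described above.
# The DRG and CPT codes and all charges are illustrative only.

DRG_PER_DIEM = {"430": 650.00}             # hypothetical psychiatric DRG per diem charge
CPT_CHARGE = {"90862": 85.00,              # hypothetical charge, medication management
              "90807": 140.00}             # hypothetical charge, psychotherapy with E/M

def inpatient_encounter_cost(drg_code: str) -> float:
    """One inpatient day is priced at the per diem rate for its DRG."""
    return DRG_PER_DIEM[drg_code]

def outpatient_encounter_cost(cpt_codes: list[str]) -> float:
    """One outpatient visit is priced as the sum of its CPT procedure charges."""
    return sum(CPT_CHARGE[code] for code in cpt_codes)

print(inpatient_encounter_cost("430"))                 # 650.0
print(outpatient_encounter_cost(["90862", "90807"]))   # 225.0
```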

To compare agency and nonagency costs, we used uniform charges from the Department of Veterans Affairs Reasonable Charges program ( 22 ) (version 1.2, effective May 8, 2001), which represent the 80th percentile of transaction charges paid to nongovernment, licensed U.S. health care providers, as described in federal regulations ( 22 , 23 ). These charges are used to compute third-party bills ( 24 ), federal tort claims, and intergovernmental health services agreements. Psychotropic medication orders were measured in days on the basis of medications prescribed ( 25 ); prices were based on allowable Medicaid rates for the state of Texas.

A total of 280 different procedures were found in the provider records for participants in the study. Among these, 157 representing psychosocial services were not classified by a CPT code. For these unclassified procedures, reasonable charges were estimated by prorating professional costs (professional time multiplied by the mean salary rate for the agency). The costs were calculated on the basis of 108,584 adult patients in the participating agencies, who received 5,218,208 procedures between 1996 and 2001.

Both outcomes and costs were "adjusted" for baseline psychiatric symptoms, length of illness, patient age, years of education, patient perception of benefits from care, family size, disposable income, Latino and African-American status, and gender. For each service type, logistic regression was used to compute an encounter rate, that is, the likelihood that on any given day the patient would have an encounter. Linear regression was used to estimate the cost per encounter, given that the patient had an encounter on that day. Time and time-treatment interactions were entered as predictor variables. By extending the analysis to the "granularity" of each day, this strategy handles the bimodal and skewed distributions that are typical of actual cost data. An expected cost for any given day was computed by multiplying the likelihood of an encounter by the expected cost per encounter and summing over all service types. Quarterly and annualized costs were computed by summing expected daily costs over the appropriate number of days.
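
To make the two-part structure explicit, the sketch below fits a logistic model for whether an encounter of a given service type occurs on a person-day and a linear model for its cost when it does, then combines them into an expected daily cost. The data layout, variable names, and use of the statsmodels library are our assumptions for illustration; the study's actual covariates and software are described in the cited methods papers ( 8 , 9 ).

```python
# Sketch of the two-part daily cost model: Pr(encounter) x E(cost | encounter).
# Assumes a person-day data frame with an `any_visit` indicator and a `cost`
# column for days with a visit; not the study's actual code or data.

import pandas as pd
import statsmodels.api as sm

def expected_daily_cost(df: pd.DataFrame, covariates: list[str]) -> pd.Series:
    X = sm.add_constant(df[covariates])

    # Part 1: logistic regression for the probability of any encounter on a day.
    p_visit = sm.Logit(df["any_visit"], X).fit(disp=0).predict(X)

    # Part 2: linear regression for cost per encounter, fit on days with a visit.
    visited = df["any_visit"] == 1
    cost_model = sm.OLS(df.loc[visited, "cost"], X.loc[visited]).fit()
    cost_if_visit = cost_model.predict(X)

    # Expected cost for the day; in the study, one such model pair is fit per
    # service type and the results are summed across service types.
    return pd.Series(p_visit * cost_if_visit, index=df.index)

# Quarterly and annual costs follow by summing expected daily costs over
# roughly 90 or 365 days, respectively.
```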

Annualized costs can thus be broken down into first-quarter costs and a time trend. First-quarter costs can be divided into an encounter rate (number of encounters per 90-day quarter) and a cost per encounter. Time trends can be divided into the change in the encounter rate and the change in encounter costs per day. Estimates were based on exact service dates available from the medical record; however, for presentation purposes, these trends were computed to reflect changes over 90-day "quarters." Because the error distributions of collected costs and cost-outcome ratios are complex, significance tests and confidence intervals were determined by bootstrapping 100,000 samples ( 26 ).
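
The bootstrap step can be illustrated generically: resample patients with replacement, recompute the statistic of interest (here, a simple between-group difference in mean annual cost), and read confidence limits off the percentiles of the replicates. This is a simplified stand-in for the study's procedure, which bootstrapped the full estimation pipeline with 100,000 samples; the data below are simulated for illustration only.

```python
# Generic percentile bootstrap for a between-group cost difference.
# The gamma-distributed costs below are simulated, skewed stand-ins for
# real per-patient annual costs; the study used 100,000 replicates.

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_diff_ci(costs_a, costs_b, n_boot=10_000, alpha=0.05):
    costs_a = np.asarray(costs_a, dtype=float)
    costs_b = np.asarray(costs_b, dtype=float)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        a = rng.choice(costs_a, size=costs_a.size, replace=True)
        u = rng.choice(costs_b, size=costs_b.size, replace=True)
        diffs[b] = a.mean() - u.mean()
    lower, upper = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return costs_a.mean() - costs_b.mean(), (lower, upper)

algorithm_costs = rng.gamma(shape=2.0, scale=5_000, size=200)   # simulated
usual_costs = rng.gamma(shape=2.0, scale=4_000, size=200)       # simulated
print(bootstrap_diff_ci(algorithm_costs, usual_costs))
```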

Results

Between March 1998 and April 1999, TMAP enrolled 1,421 patients. The final analytic sample was reduced to 926 patients in order to match patients by treatment group for baseline characteristics. Data collection ended in April 2000. This study examined annualized adjusted costs during the first year after baseline ( Table 1 ). The costs were broken down by first-quarter costs and quarterly time trends ( Table 2 ), into encounter rates and per encounter costs ( Table 3 ), and by agency physician procedures ( Table 4 ). Cost-outcome ratios from agency costs were calculated ( Table 5 ).

Table 1 Annual adjusted costs for health and mental health services among participants in the Texas Medication Algorithm Project, by disorder and treatment group

Table 2 First-quarter costs and time trends among participants in the Texas Medication Algorithm Project, by disorder and treatment group

Table 3 Adjusted first-quarter and quarterly growth rates in encounters and costs of encounters for participants in the Texas Medication Algorithm Project, by disorder, treatment group, and service type a

a Adjusted for patient perception of benefits, education in years, family household size, monthly disposable income, Latino status, African-American status, and gender. Adjusted for disorder-specific factors: for major depression, baseline Inventory of Depressive Symptomatology-Clinical Version and patient-reported length of illness in years; for bipolar disorder, baseline 24-item version of the Brief Psychiatric Rating Scale (BPRS) and patient age; for schizophrenia, baseline 18-item version of the BPRS and patient age

Table 4 First-quarter procedures provided by agency physicians to participants in the Texas Medication Algorithm Project, by disorder and treatment group

Table 5 Outcomes and cost differences between participants in the Texas Medication Algorithm Project who received usual care or algorithm-based care, by disorder

Total annual costs

Table 1 displays annualized, adjusted, per patient costs, by treatment group and service type. For major depression, the total costs of care incurred by patients in the algorithm-based management group were $11,806 higher (50 percent higher) than the costs for patients in usual care. Of this difference, $5,541 (47 percent) came from higher costs of nonagency providers, $3,320 (28 percent) came from agency inpatient care, $1,812 (15 percent) came from agency clinical coordinators, and $1,192 (10 percent) came from agency psychotropic medications. Offsetting these differences were the lower agency costs for nonphysicians in algorithm-based care—a difference of $597 (5 percent). In contrast, for bipolar disorder and schizophrenia, the annual adjusted cost for algorithm-based care was not significantly different from the cost of usual care.

Of interest to public mental health providers is how agency costs alone may have differed between algorithm-based care and usual care. These analyses are appropriate because agency and nonagency costs were uncorrelated within disorder, suggesting that nonagency services were neither substitutes for nor complements to agency services. Again, results differed by disorder. As shown in Table 1 , agency costs for major depression were $6,365 (62 percent) higher for patients in algorithm-based care than for patients in usual care. The difference was split between higher inpatient costs for algorithm-based care ($3,320, 52 percent) and higher outpatient costs ($3,046, 48 percent). On the other hand, agency costs were 22 percent lower for patients with bipolar disorder in algorithm-based care than for patients in usual care, a difference driven by inpatient costs. Finally, for schizophrenia, no statistically significant difference in agency costs was found between treatment groups, because higher inpatient costs in algorithm-based care were offset, in part, by lower outpatient nonphysician costs.

Only algorithm-based care provided clinical coordinators, emphasized psychotropic medication algorithms, and provided expert consultants. Thus some additional costs were expected. The adjusted, annualized, per patient cost for clinical coordinators in algorithm-based care was $1,812 for major depression, $1,032 for bipolar disorder, and $898 for schizophrenia, representing 11 percent, 8 percent, and 5 percent of total agency costs, respectively ( Table 1 ). Patients in algorithm-based care were also given more prescriptions for psychotropic medications than patients in the usual-care group. Compared with patients in usual care, patients in algorithm-based care had higher medication costs by $1,192 (98 percent higher) for major depression, $1,491 (93 percent) for bipolar disorder, and $2,736 (148 percent) for schizophrenia. These differences represent 7 percent, 12 percent, and 14 percent of total per patient agency costs for the respective disorders ( Table 1 ).

As shown in Table 1 , patients with major depression in algorithm-based care incurred an additional $639 in agency costs for physician services (51 percent higher than usual care), representing 4 percent of total agency costs for algorithm-based care. The physician costs for bipolar disorder were 36 percent lower than for usual care, representing 5 percent of agency costs for algorithm-based care. No statistically significant differences in costs were found for schizophrenia. For all three disorders, patients in algorithm-based care incurred lower nonphysician agency costs than patients in usual care—20 percent, 40 percent, and 65 percent, respectively, which represents 4 percent, 14 percent, and 28 percent of total agency costs.

Time trends

Table 2 separates annual costs into costs for the first quarter and a time trend thereafter. Overall, first-quarter differences in agency or all-care costs trended toward zero. For major depression, first-quarter agency costs for patients in algorithm-based care were $2,150 higher than for patients in usual care. However, this cost difference diminished by $480 for the second and subsequent quarters, a result of a decrease in costs for algorithm-based care (-$408) and an increase for usual care ($72). For bipolar disorder, on the other hand, first-quarter agency costs for patients in algorithm-based care were $1,540 lower than for patients in usual care, a difference driven by lower costs for inpatient care that in time regressed to zero.

Costs for clinical coordinators fell significantly each quarter. In contrast, for major depression, differences in costs between algorithm-based care and usual care for prescribed psychotropic medications actually increased with time, reflecting declining costs among usual-care patients and sustained costs among patients in algorithm-based care. Costs for agency physician services also fell over time but at a faster rate for algorithm-based care than for usual care, although between-group differences in rates were significant only for schizophrenia.

Encounter rates and encounter costs

Table 3 breaks down adjusted costs into an encounter rate and a cost per encounter. Each of these is expressed as a first-quarter mean and a quarterly time trend. The first-quarter encounter rate is expressed as the percentage of days on which patients had an encounter. For example, in Table 3, a 2.2 percent encounter rate means that for every 100 days, patients with major depression in usual care saw an agency physician a mean of 2.2 days. Between-group differences in encounter rates are expressed as odds ratios. Thus the 4.9 percent encounter rate for patients with major depression in algorithm-based care indicates that these patients were 2.3 times more likely to have a physician encounter during the first quarter than their usual-care counterparts with a 2.2 percent encounter rate. Time trends are also expressed as odds ratios and reflect changes in encounter rates over a 90-day interval. Thus, for usual care, an odds ratio of .89 means that after 90 days, the encounter rate is expected to be only 89 percent of its prior level; the 2.2 percent first-quarter encounter rate therefore dropped to 1.6 percent by the fourth quarter. Algorithm-based care experienced a faster decline (an odds ratio of .77), so that the 4.9 percent first-quarter rate diminished to 2.3 percent by the fourth quarter. Thus the first-quarter between-group odds ratio of 2.3 declined each quarter to about 86 percent of its prior level (.77 divided by .89), for a fourth-quarter between-group difference of 1.6.
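
The arithmetic in this example can be checked directly by applying the quarterly odds ratio three times, from the first to the fourth quarter, on the odds scale and converting back to a rate. The sketch below uses only the figures quoted above; it is a reader's check, not the study's estimation code.

```python
# Worked check of the agency-physician example: project a first-quarter
# encounter rate forward three quarters using its quarterly odds ratio.

def project_rate(first_quarter_rate: float, quarterly_or: float, quarters: int = 3) -> float:
    odds = first_quarter_rate / (1 - first_quarter_rate)
    odds *= quarterly_or ** quarters
    return odds / (1 + odds)

usual_q4 = project_rate(0.022, 0.89)   # about 0.016, i.e., 1.6 percent
algo_q4 = project_rate(0.049, 0.77)    # about 0.023, i.e., 2.3 percent
print(round(usual_q4, 3), round(algo_q4, 3))
```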

Costs were calculated on the basis of 2001 reasonable charges, or the 80th percentile of paid transaction charges in the United States to nongovernment providers. Thus, in the example, the average cost per encounter was $176 for usual care and $124 for algorithm-based care, a difference of -$52. Each quarter, the per encounter cost increased by $3 for patients in usual care and by $16 for those in algorithm-based care, a $13 greater quarterly increase for algorithm-based care.

Overall, lower encounter rates were often offset by higher costs per encounter. A clear exception was the falling quarterly costs for clinical coordinators, which resulted from falling encounter rates with stable costs per encounter. The stable costs probably reflected clinical coordinators' adherence to regimented procedures ( Table 3 ).

As shown in Table 3 , patients in algorithm-based management had more contacts with agency physicians than usual-care patients but at lower costs per encounter. Although the differences diminished, rates for algorithm-based management remained higher throughout the one-year follow-up period. Patients with bipolar disorder and schizophrenia in algorithm-based management actually had lower annualized agency physician costs ( Table 1 ), because their greater encounter rates were offset by lower costs per physician encounter ( Table 3 ).

The higher costs for psychotropic medication in algorithm-based care ( Table 2 ) were the result of more days with active orders and higher costs per prescription day ( Table 3 ). Costs for prescribed psychotropic medications did not change significantly over time, a result of falling prescription days that were offset by rising per diem prescription costs ( Table 3 ).

Agency inpatient psychiatric care had an important impact on total agency costs. Among the 926 study patients, 143 (15.4 percent) had at least one agency inpatient psychiatric stay during the one-year follow-up, with rates varying by disorder: 9.4 percent for major depression, 17.2 percent for bipolar disorder, and 20.7 percent for schizophrenia (χ 2 =16.9, df=2, p<.001). Between-group differences in inpatient costs were driven by differences in encounter rates, which regressed to the mean over time ( Table 3 ).

A total of 706 (76.2 percent) of 926 participants received general medical or mental health services from nonagency providers during the one-year observation, including 173 (18.7 percent) who reported inpatient stays. Use rates varied by disorder: 82.6 percent for major depression, 81.3 percent for bipolar disorder, and 64.7 percent for schizophrenia (χ 2 =34, df=2, p<.001).

Procedures per encounter

Table 4 presents agency physician procedures to explain per encounter costs. The lower adjusted costs per physician encounter in algorithm-based care ( Table 3 ) were the result of fewer procedures during a single encounter and, for bipolar disorder and schizophrenia, lower mean costs per procedure ( Table 4 ). Physicians in both algorithm-based care and usual care offered medication-related services (CPT code 90862, pharmacy management with script use and review), but usual-care physicians also tended to offer psychotherapy combined with medication management (CPT codes 90805, 90807, 90809, and G0074).

Clinical coordinators provided patients in algorithm-based management an average of 1.14 procedures per encounter, or 5.24 procedures for the quarter, at a mean reasonable charge of $112.74 per procedure, or $128.29 per encounter. Among 2,260 procedures clinical coordinators provided, 875 (38.7 percent) were for psychiatric diagnostic interview examinations and rating scales, 355 (15.7 percent) for eligibility assessment, 271 (12.0 percent) for community support, 231 (10.2 percent) for pharmacy management and medication management, 222 (9.8 percent) for service coordination, 173 (7.7 percent) for case coordination, 68 (3.0 percent) for rehabilitative treatment plan oversight, and 65 (2.9 percent) for all other services.

Cost-outcome ratios

Table 5 presents estimates of cost-outcome ratios (scaled so that symptom reductions are represented as positive values) and estimated likelihoods from bootstrapped samples. These analyses indicated whether, compared with usual care, algorithm-based care was cost-effective (better outcomes and lower costs), cost-ineffective (worse outcomes and higher costs), or of mixed cost-effectiveness (worse outcomes but lower costs, or better outcomes but at a higher cost).

The cost-outcome ratio describes the change in costs divided by the change in outcomes observed between algorithm-based treatment and usual care.
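
In symbols (our notation, restating the sentence above), with costs C and outcomes expressed as symptom reductions so that improvement is positive:

```latex
\text{cost-outcome ratio} \;=\;
  \frac{C_{\text{algorithm}} - C_{\text{usual}}}
       {\Delta_{\text{algorithm}} - \Delta_{\text{usual}}}
```

Here Δ denotes the adjusted reduction in symptom severity (IDS-C30 or BPRS points) and C the adjusted cost per patient over the same period, so a positive ratio with better outcomes corresponds to the added cost per additional point of symptom reduction.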

Algorithm-based care for major depression was essentially of mixed cost-effectiveness (in 98.8 percent of bootstrapped samples). That is, patients in algorithm-based care had less severe symptoms but at a higher quarterly cost of $377 for each quarterly point reduction in symptom severity (measured by the IDS-C30). During the first quarter, the cost for patients in algorithm-based care was $480 more for each point reduction, with costs actually falling over time by an average of $97 per quarter. The drop in costs was driven mostly by declining agency costs. Accounting for the higher agency costs in algorithm-based care were clinical coordinators (28 percent), physicians (10 percent), medications (19 percent), and inpatient care (52 percent); offsetting these costs was agency nonphysician care (-9 percent) ( Table 1 ).

For bipolar disorder, algorithm-based care was essentially cost-effective for both the first quarter and the full year (98 percent and 94 percent of bootstrapped samples, respectively). For the first quarter, patients in algorithm-based care improved, at a lower quarterly cost of $457 for each additional point reduction in psychiatric symptoms measured with the BPRS-24. However, this cost-per-symptom advantage declined by $116 per quarter, driven mostly by increasing agency costs ( Tables 2 and 5 ). The cost savings came from lower agency costs for inpatient care (-105 percent), nonphysician care (-48 percent), and physician care (-17 percent), which offset higher costs for clinical coordinators (28 percent) and prescribed psychotropic medications (41 percent).

For schizophrenia, findings were somewhat inconclusive, because a cost-ineffective finding could not be dismissed with these data (11 percent of bootstrapped samples). For the first quarter, patients with schizophrenia in algorithm-based care experienced less severe symptoms, although these differences declined as patients in usual care actually improved relative to their counterparts in algorithm-based care. Cost differences were not statistically significant.

Discussion

The Texas Medication Algorithm Project was a novel disease management experiment intended to improve outcomes with limited resources in a publicly supported mental health system. The study aimed to determine cost outcomes for programs that combined consensus-based medication algorithms with clinical care coordinators, patient education, enhanced clinical documentation, and expert consultants. The findings of the study are important to state health departments considering TMAP as a model to improve mental health services ( 27 ).

Compared with usual care, algorithm-based care of major depression was associated with better outcomes that were sustained over a one-year follow-up but at higher, though declining, costs. Algorithm-based care of bipolar disorder was cost-effective, associated with both better outcomes and lower, though increasing, overall costs. Algorithm-based care of schizophrenia was associated with better initial outcomes that tended to decline with time; however, no significant difference was found in the cost of care.

Our extensive cost analyses were designed to break down annual costs into more meaningful component parts, including program status, provider type, encounter rates, and cost per encounter, to describe how costs and outcomes unfold over time. These analyses revealed differences in the way costs were structured between the two programs across services and providers and over time. Patients in algorithm-based care saw physicians in the mental health department more often than their usual-care counterparts, although the resulting costs were offset in part by lower costs per encounter, because physicians generally offered fewer psychosocial procedures per visit. Most differences in costs occurred during the first quarter and afterwards regressed to the mean.

These results underscore the role of clinical coordinators in helping translate evidence-based guidelines into actual clinical practice settings. Although patient encounters with clinical coordinators tended to decline with time, the mix of procedures provided during visits to clinical coordinators was consistent over time and across patients, suggesting that clinical coordinators followed prescribed regimens of care. For all three disorders, patients in algorithm-based management had more frequent visits, with fewer procedures per visit, to their agency physician, who, in turn, offered more medication-related services and fewer psychotherapy procedures than were offered in usual care. These results parallel findings from earlier studies in which psychiatric nurses' sharing of structured clinical diagnostic interview results led physicians to order more diagnostic and evaluation visits, to rediagnose patients' conditions, and to change prescribed medications ( 28 ). Similarly, depression care specialists have been found to improve outcomes among elderly depressed patients in primary care settings ( 29 ).

One might speculate that procedures provided by clinical coordinators may have served as "substitutes" for nonphysician agency services (for example, case management and patient advocacy), because patients in algorithm-based care made fewer visits to, and received less costly procedures from, these nonphysician agency providers. However, little evidence exists to support this notion. For example, use of agency nonphysician services by patients in algorithm-based care did not increase over time in the face of declining use of clinical coordinators. Furthermore, although patient encounters with agency nonphysicians were often associated with the disposable income of the patients and with family size (p<.05), no such relationships were found for clinical coordinators. Further study is warranted.

It is not known why algorithm-based management was associated with a greater number of active medication days, although these differences diminished with time. More frequent physician visits and expanded roles of clinical coordinators are possible explanations. Because clinics providing algorithm-based care used special chart documentation forms, it is likely that differences are partly attributable to improved documentation in charts of patients in algorithm-based care. If so, then differences in costs of psychotropic medications between algorithm-based care and usual care were much less than these results suggest. On the other hand, patient education and regular monitoring of symptoms and side effects may have encouraged patients to keep appointments and physicians to regularly prescribe medication. Further research is needed.

Our findings underscore the usefulness of the study's innovations. Because costs were separated into components, investigators were able to distinguish program cost implications in the presence of utilization noise. Although correlated errors across the models of these separate components warrant additional caution ( 9 ), potential biases in standard error estimates were addressed by computing significance tests from 100,000 bootstrapped samples. Furthermore, there was little evidence that agency care acted as a substitute for or complement to nonagency care. For example, no significant association was found between nonagency and agency outpatient physician encounters; a 1 percent increase in nonagency outpatient visits was associated with only a .03 percent increase in agency physician visits (p=.63). Combining nonagency with agency costs did not alter the study's conclusions.

The study has other limitations. Algorithm-based care and usual care were assigned to matched clinics to avoid treatment blends (both groups treated by the same physician) and water-cooler effects (both groups treated within the same clinic) ( 10 , 11 , 12 ). Relative costs of medications will vary when prices other than Texas allowable Medicaid fees are substituted. Adjustments for patient baseline differences may not fully account for all preexisting relevant differences. Psychiatric symptom measures were specific to a disorder and do not reflect all aspects of patient outcomes. Algorithm-based management focused on medications and was not designed to influence other services, such as case management and counseling. Adjustments are subject to potential model misspecification error. Variation in findings across disorders in the context of these limitations points to the need for further studies, such as testing new algorithms over a longer period and with a broader range of services, such as psychotherapy, addiction treatment, and social services; determining whether self-reports can substitute for clinician ratings in clinical decision making ( 30 , 31 ); and examining the impact of staff organization and practice procedures on costs and health outcomes ( 29 ). Furthermore, the study findings do not generalize to patients in residential treatment, who were excluded from the study. Also, costs do not include the costs of algorithm development or physician training. Finally, the many reported estimates inflate type I error. However, a simple comparison of total annual costs would mask the subtle differences in patterns of care use across providers and service types and over time, patterns that these analyses were designed to reveal.

Conclusions

This study found that an algorithm-based disease management program that included clinical guideline coordinators, patient and family education, routine assessments of symptoms and side effects at each clinic visit, and expert consultations may be effective in improving psychiatric symptom outcomes. Although more frequent physician visits and greater use of medications are to be expected, the bottom-line impact on agency and nonagency costs may vary across disorders and over time. Future studies should focus on medications and dosages actually prescribed and the association between algorithm adherence and clinical outcomes.

Acknowledgments

This project was supported in part by the Robert Wood Johnson Foundation (grant 031023), the Meadows Foundation (grant 97040055), the National Institute of Mental Health (grant 5-R24-MH53799), Mental Health Connections and the Texas Department of Mental Health and Mental Retardation, a Research Career Scientist Award from the Department of Veterans Affairs Health Services Research and Development (RCS 92-403), the Nanny Hogan Boyd Charitable Trust, the Betty Jo Hay Distinguished Chair in Mental Health, the Rosewood Corporation Chair in Biomedical Science, and the department of psychiatry of the University of Texas Southwestern Medical Center at Dallas. The following companies provided unrestricted educational grants for the training of providers and medications at no cost to the project: AstraZeneca Pharmaceuticals, Abbott Laboratories, Bristol-Myers Squibb Company, Eli Lilly and Company, Forest Laboratories, Glaxo Wellcome, Inc., Janssen Pharmaceutica, Novartis Pharmaceuticals Corporation, Organon Inc., Pfizer, Inc., SmithKline Beecham, and Wyeth-Ayerst Laboratories. The U.S. Pharmacopoeia provided materials for training purposes.

Dr. Kashner, Dr. Rush, Dr. Carmody, Dr. Trivedi, Ms. Wicker, and Dr. Suppes are affiliated with the department of psychiatry at the University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, Texas 75390-9086 (e-mail, [email protected]). Dr. Kashner and Ms. Wicker are also with the North Texas Health Care System at the Department of Veterans Affairs in Dallas. Dr. Crismon and Dr. Toprac are affiliated with the Texas Department of Mental Health and Mental Retardation in Austin. Dr. Crismon is also with the College of Pharmacy at the University of Texas at Austin. Dr. Miller is with the department of psychiatry at the University of Texas Health Sciences Center at San Antonio.

References

1. Suppes T, Swann A, Dennehy EB, et al: Texas Medication Algorithm Project: development and feasibility testing of a treatment algorithm for patients with bipolar disorder. Journal of Clinical Psychiatry 62:439-447, 2001

2. Gilbert DA, Altshuler KZ, Rago WV, et al: Texas Medication Algorithm Project: definitions, rationale, and methods to develop medication algorithms. Journal of Clinical Psychiatry 59:345-351, 1998

3. Rush AJ, Rago WV, Crismon ML, et al: Medication treatment for the severely and persistently mentally ill: the Texas Medication Algorithm Project. Journal of Clinical Psychiatry 60:284-291, 1999

4. Rush AJ, Crismon ML, Toprac M, et al: Consensus guidelines in the treatment of major depressive disorder. Journal of Clinical Psychiatry 59(suppl 20):73-84, 1998

5. Crismon ML, Trivedi M, Pigott TA, et al: The Texas Medication Algorithm Project: report of the Texas Consensus Conference Panel on medication treatment of major depressive disorder. Journal of Clinical Psychiatry 60:142-156, 1999

6. Miller AL, Chiles JA, Chiles JK, et al: The Texas Medication Algorithm Project schizophrenia algorithms. Journal of Clinical Psychiatry 60:649-657, 1999

7. Rush AJ, Crismon ML, Kashner TM, et al: Texas Medication Algorithm Project, phase 3 (TMAP-3): rationale and study design. Journal of Clinical Psychiatry 64:357-369, 2003

8. Kashner TM, Carmody TJ, Suppes T, et al: Catching-up on health outcomes: the Texas Medication Algorithm Project. Health Services Research 38:311-331, 2003

9. Kashner TM, Rush AJ, Altshuler KZ: Measuring costs of guideline-driven mental health care: the Texas Medication Algorithm Project. Journal of Mental Health Policy and Economics 2:111-121, 1999

10. Trivedi MH, Rush AJ, Crismon ML, et al: The Texas Medication Algorithm Project (TMAP): clinical results for patients with major depressive disorder. Archives of General Psychiatry 61:669-680, 2004

11. Suppes T, Rush AJ, Dennehy EB, et al: Texas Medication Algorithm Project, phase 3 (TMAP): clinical results for patients with a history of mania. Journal of Clinical Psychiatry 64:370-382, 2003

12. Miller AL, Crismon ML, Rush AJ, et al: The Texas Medication Algorithm Project (TMAP): clinical results for patients with schizophrenia. Schizophrenia Bulletin 30:627-647, 2004

13. Rush AJ, Giles DE, Schlesser MA, et al: The Inventory of Depressive Symptomatology (IDS): preliminary findings. Psychiatry Research 18:65-87, 1986

14. Surís A, Kashner TM, Gillapsy JA, et al: Validation of the Inventory of Depressive Symptomatology (IDS) in cocaine dependent inmates. Journal of Offender Rehabilitation 32:15-30, 2001

15. Rush AJ, Gullion CM, Basco MR, et al: The Inventory of Depressive Symptomatology (IDS): psychometric properties. Psychological Medicine 26:477-486, 1996

16. Overall JE, Gorham DR: Introduction: the Brief Psychiatric Rating Scale (BPRS): recent developments in ascertainment and scaling. Psychopharmacology Bulletin 24:97-99, 1988

17. Ventura J, Nuechterlein KH, Subotnik K, et al: Symptom dimensions in recent-onset schizophrenia: the 24-item expanded BPRS. Schizophrenia Research 15:22, 1995

18. Ventura J, Green MF, Shaner A, et al: Training and quality assurance with the Brief Psychiatric Rating Scale: the drift busters. International Journal of Methods in Psychiatric Research 221-244, 1993

19. Kashner TM, Suppes T, Rush AJ, et al: Measuring use of outpatient care among mentally ill individuals: a comparison of self-reports and provider records. Evaluation and Program Planning 22:31-39, 1999

20. St Anthony's DRG Guidebook. Reston, Va, St Anthony, 2002

21. Current Procedural Terminology. Atlanta, Ga, AMA Press, 2002

22. Department of Veterans Affairs, Reasonable Charges, version 1.2, 66(89) FR 23326 (5/8/2001), 64 FR 22676 (10/1/1999), 38 CFR 17.101. Based on the Balanced Budget Act of 1997, PL 105-33, Sec 8023(d), 38 USC 1729

23. Department of Veterans Affairs, Reasonable Charges, version 1.4, 68(82) FR 22773 (4/29/2003), 68(82) FR 22966 (4/29/2003), 38 CFR 17.101. Based on the Balanced Budget Act of 1997, PL 105-33, Sec 8023(d), 38 USC 1729

24. Balanced Budget Act of 1997, PL 105-33, Sec 8023(d), 38 USC 1729

25. The National Drug Code Directory. Washington, DC, US Food and Drug Administration. Available at www.fda.gov/cder/ndc

26. Davison AC, Hinkley DV: Bootstrap Methods and Their Applications. Cambridge, UK, Cambridge University Press, 1997

27. Dewan NA, Conley D, Svendsen D, et al: A quality improvement process for implementing the Texas Algorithm for Schizophrenia in Ohio. Psychiatric Services 54:1646-1649, 2003

28. Kashner TM, Rush AJ, Surís A, et al: Impact of structured clinical interviews on physician behavior in community mental health settings. Psychiatric Services 54:712-718, 2003

29. Unützer J, Katon W, Callahan CM, et al: Collaborative care management of late-life depression in the primary care setting. JAMA 288:2836-2845, 2002

30. Trivedi MH, Rush AJ, Ibrahim H, et al: The Inventory of Depressive Symptomatology, Clinician Rating (IDS-C), and Self-Report (IDS-SR), and the Quick Inventory of Depressive Symptomatology, Clinician Rating (QIDS-C), and Self-Report (QIDS-SR) in public sector patients with mood disorders: a psychometric evaluation. Psychological Medicine 34:73-82, 2004

31. Rush AJ, Trivedi MH, Ibrahim HM, et al: The 16-Item Quick Inventory of Depressive Symptomatology (QIDS), Clinician Rating (QIDS-C) and Self-Report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biological Psychiatry 54:573-583, 2003