
Evaluating Disease Management Program Effectiveness

An Introduction to the Bootstrap Technique

  • Leading Article
Disease Management & Health Outcomes

Abstract

Disease management (DM) program evaluations are often limited in scope because the important subsets of the treated population typically contain small numbers of members. Identifying subsets of the data whose results differ from those of the program as a whole can lend insight into where, when, and how the program achieves its results. In addition, few classical statistical tools are available for the small sample sizes typically encountered in DM. Without readily available standard error and confidence interval (CI) calculations, the analyst may be misled by specious findings.

A method called the ‘bootstrap’ is introduced as a technique that allows DM program evaluators to use a broader array of quantities of interest and to extend inferences from program results to the population. The bootstrap uses the power of modern computers to draw many random samples, with replacement, from a given data set, so that the distribution of a statistic (e.g. the mean, a proportion, or the median) can be estimated from the repeated samples. Using a congestive heart failure (CHF) program as an example, the bootstrap technique is applied to extend a DM program evaluation beyond the questions addressed by classical statistical inference: (i) how much of a median cost decrease can be expected as a result of the program?; (ii) did the program affect the highest- and lowest-cost members equally?; and (iii) how much of a decrease in the proportion of patients experiencing a hospitalization can be expected as a result of the program?
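To make the resampling idea concrete, the sketch below shows one way such quantities might be bootstrapped. It is not the authors' code, and all data values, sample sizes, and parameters are hypothetical; it simply resamples each group with replacement and recomputes the median (for question i) and the proportion hospitalized (for question iii) on every resample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-member costs before and after a CHF program
# (illustrative values only; not the article's data).
pre_costs = rng.lognormal(mean=8.0, sigma=1.0, size=80)
post_costs = rng.lognormal(mean=7.8, sigma=1.0, size=80)

# Hypothetical hospitalization indicators (1 = at least one admission).
pre_hosp = rng.binomial(1, 0.45, size=80)
post_hosp = rng.binomial(1, 0.30, size=80)

def bootstrap_statistic(data, stat, n_resamples=5000):
    """Resample the data with replacement n_resamples times and
    return the statistic computed on each resample."""
    data = np.asarray(data)
    idx = rng.integers(0, data.size, size=(n_resamples, data.size))
    return np.array([stat(data[row]) for row in idx])

# (i) Median cost decrease: bootstrap each group independently and
# take the difference of the resampled medians.
boot_median_drop = (bootstrap_statistic(pre_costs, np.median)
                    - bootstrap_statistic(post_costs, np.median))

# (iii) Decrease in the proportion hospitalized: the mean of a 0/1
# indicator is the proportion, so the same routine applies.
boot_prop_drop = (bootstrap_statistic(pre_hosp, np.mean)
                  - bootstrap_statistic(post_hosp, np.mean))

print(f"median cost decrease: {np.median(pre_costs) - np.median(post_costs):,.0f}")
print(f"bootstrap SE of the median decrease: {boot_median_drop.std(ddof=1):,.0f}")
print(f"proportion decrease: {pre_hosp.mean() - post_hosp.mean():.2f}")
print(f"bootstrap SE of the proportion decrease: {boot_prop_drop.std(ddof=1):.2f}")
```

The spread of each bootstrap distribution stands in for the sampling variability that classical formulae would otherwise have to supply.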

The potential advantages of the bootstrap technique in DM program evaluation were clearly illustrated using this small CHF program example. A more robust understanding of program impact is possible when the evaluator has more tools and methods available. This is particularly true in DM, where case-mix is inherently biased (e.g. programs strive to enroll the sickest members first), distributions are often skewed or contain outliers, and sample sizes may be small.

The bootstrap technique generates empirical distributions that allow statistical inferences about a population to be drawn more accurately. Moreover, whereas classical inference techniques were designed for parametric statistics (i.e. they assume a normal distribution), the bootstrap can be used for measures that have no convenient statistical formulae. CIs can also be constructed around any such statistic, making the bootstrap a viable option for evaluating DM program effectiveness.
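As an illustration of how a CI can be read directly off a bootstrap distribution, the self-contained sketch below applies the percentile method to a median, again using hypothetical data rather than anything from the CHF program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-member cost savings (illustrative only).
savings = rng.lognormal(mean=6.5, sigma=1.2, size=60)

# Bootstrap distribution of the sample median.
boot_medians = np.array([
    np.median(rng.choice(savings, size=savings.size, replace=True))
    for _ in range(5000)
])

# Percentile-method 95% CI: the 2.5th and 97.5th percentiles of the
# bootstrap distribution serve as the interval limits.
lower, upper = np.percentile(boot_medians, [2.5, 97.5])
print(f"sample median: {np.median(savings):,.0f}")
print(f"95% percentile bootstrap CI: ({lower:,.0f}, {upper:,.0f})")
```

No distributional formula for the median is needed; the interval comes entirely from the resampled values.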



Acknowledgments

No sources of funding were used to assist in the preparation of this study. The authors have no conflicts of interest that are directly relevant to the content of this study.

Corresponding author

Correspondence to Ariel Linden.


Cite this article

Linden, A., Adams, J.L. & Roberts, N. Evaluating Disease Management Program Effectiveness: An Introduction to the Bootstrap Technique. Dis Manage Health Outcomes 13, 159–167 (2005). https://doi.org/10.2165/00115677-200513030-00002
