Published in: BMC Medical Research Methodology 1/2016

Open Access 01-12-2016 | Research article

Application of a nonrandomized stepped wedge design to evaluate an evidence-based quality improvement intervention: a proof of concept using simulated data on patient-centered medical homes

Authors: Alexis K. Huynh, Martin L. Lee, Melissa M. Farmer, Lisa V. Rubenstein


Abstract

Background

Stepped wedge designs have gained recognition as a method for rigorously assessing the implementation of evidence-based quality improvement interventions (QIIs) across multiple healthcare sites. In theory, this design randomly assigns sites to successive QII implementation start dates on a timeline determined by the evaluators. In practice, however, QII timing is often driven more by site readiness. We propose an alternative version of the stepped wedge design that does not assume randomized timing of implementation, retaining the method’s analytic advantages while extending them to a broader set of evaluations. To test the feasibility of a nonrandomized stepped wedge design, we developed simulated data on patient care experiences and on QII implementation that had the structure and features of the data expected from a planned QII. We then applied the design in anticipation of performing an actual QII evaluation.

Methods

We used simulated data on 108,000 patients to model nonrandomized stepped wedge results from QII implementation across nine primary care sites over 12 quarters. The simulated outcome was change in a single self-administered question on access to care used by the United States Veterans Health Administration (VA) as part of its quarterly patient ratings of quality of care. Our main predictors were QII exposure and time. Based on study hypotheses, we assigned improvements in access of 4 to 11 % when sites were first exposed to the implementation and of 1 to 3 % in each ensuing time period in which sites continued with it. We included site-level (practice size) and respondent-level (gender, race/ethnicity) characteristics that might account for the nonrandomized timing of site implementation of the QII. We analyzed the resulting data as a repeated cross-sectional model using HLM 7, with a three-level hierarchical data structure and an ordinal outcome. The levels in the data structure were patient ratings, timing of adoption of the QII, and primary care site.
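The data-generation scheme above can be sketched in a few lines of Python. This is an illustrative simplification, not the authors' code: the baseline response rate, the per-cell sample size (1,000 patients per site-quarter, giving 9 × 12 × 1,000 = 108,000), the staggered "readiness" start quarters, and the use of a binary favorable/unfavorable rating in place of the paper's ordinal survey item are all assumptions made here for the sketch.

```python
import random

random.seed(0)

N_SITES, N_QUARTERS = 9, 12
PATIENTS_PER_CELL = 1000       # 9 sites x 12 quarters x 1000 = 108,000 patients
BASE_RATE = 0.50               # assumed baseline P(favorable access rating)

# Nonrandomized timing: each site begins the QII when "ready" -- here an
# arbitrary staggered quarter rather than a randomly assigned one.
start_quarter = [2 + s for s in range(N_SITES)]

# Hypothesized effects: a 4-11 % jump at first exposure, then 1-3 % more
# in each ensuing quarter of continued implementation (drawn once per site).
jump = [random.uniform(0.04, 0.11) for _ in range(N_SITES)]
slope = [random.uniform(0.01, 0.03) for _ in range(N_SITES)]

def p_good(site, quarter):
    """Probability of a favorable access rating for a site-quarter cell."""
    if quarter < start_quarter[site]:
        return BASE_RATE
    exposed_quarters = quarter - start_quarter[site]
    return min(1.0, BASE_RATE + jump[site] + slope[site] * exposed_quarters)

# Repeated cross-sectional sampling: a fresh draw of patients each quarter.
data = [(s, q, int(random.random() < p_good(s, q)))
        for s in range(N_SITES)
        for q in range(N_QUARTERS)
        for _ in range(PATIENTS_PER_CELL)]
```

A dataset like `data` would then be fit with a three-level hierarchical model (patients within adoption timing within sites), as the paper does in HLM 7.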

Results

We were able to demonstrate a statistically significant improvement in access associated with adoption of the QII, as postulated in our simulation. The linear time trend while sites were in the control state was not significant, as also expected under the real-life scenario of the example QII.

Conclusions

We concluded that the nonrandomized stepped wedge design was feasible within the parameters of our planned QII with its data structure and content. Our statistical approach may be applicable to similar evaluations.
Metadata
Publisher: BioMed Central
Electronic ISSN: 1471-2288
DOI: https://doi.org/10.1186/s12874-016-0244-x
