Published in: Prevention Science 2/2011

01-06-2011

Replication in Prevention Science

Authors: Jeffrey C. Valentine, Anthony Biglan, Robert F. Boruch, Felipe González Castro, Linda M. Collins, Brian R. Flay, Sheppard Kellam, Eve K. Mościcki, Steven P. Schinke


Abstract

Replication research is essential for the advancement of any scientific field. In this paper, we argue that prevention science will be better positioned to help improve public health if (a) more replications are conducted; (b) those replications are systematic, thoughtful, and conducted with full knowledge of the trials that have preceded them; and (c) state-of-the-art techniques are used to summarize the body of evidence on the effects of the interventions. Under real-world demands it is often not feasible to wait for multiple replications to accumulate before making decisions about intervention adoption. To help individuals and agencies make better decisions about intervention utility, we outline strategies that can be used to help understand the likely direction, size, and range of intervention effects as suggested by the current knowledge base. We also suggest structural changes that could increase the amount and quality of replication research, such as the provision of incentives and a more vigorous pursuit of prospective research registers. Finally, we discuss methods for integrating replications into the roll-out of a program and suggest that strong partnerships with local decision makers are a key component of success in replication research. Our hope is that this paper can highlight the importance of replication and stimulate more discussion of the important elements of the replication process. We are confident that, armed with more and better replications and state-of-the-art review methods, prevention science will be in a better position to positively impact public health.
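The "state-of-the-art techniques" for summarizing a body of replications that the abstract refers to are typically random-effects meta-analytic models, which pool effect estimates across trials while allowing for between-study heterogeneity. As a minimal illustrative sketch (not the authors' own analysis), the DerSimonian-Laird estimator can be implemented in a few lines; the effect sizes and variances below are hypothetical values standing in for results from four replication trials:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis via the DerSimonian-Laird estimator.

    effects   -- per-study effect sizes (e.g., standardized mean differences)
    variances -- per-study sampling variances
    Returns the pooled estimate, the between-study variance tau^2,
    and a 95% confidence interval for the pooled effect.
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    # Cochran's Q statistic measures observed heterogeneity
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance

    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, tau2, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized mean differences from four replications
effects = [0.50, 0.10, 0.45, -0.05]
variances = [0.02, 0.03, 0.05, 0.04]
pooled, tau2, ci = dersimonian_laird(effects, variances)
print(f"pooled d = {pooled:.2f}, tau^2 = {tau2:.3f}, "
      f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The width of the resulting confidence interval, together with tau², conveys exactly the "direction, size, and range" information the abstract argues decision makers need when only a handful of replications exist.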
Metadata
Title
Replication in Prevention Science
Authors
Jeffrey C. Valentine
Anthony Biglan
Robert F. Boruch
Felipe González Castro
Linda M. Collins
Brian R. Flay
Sheppard Kellam
Eve K. Mościcki
Steven P. Schinke
Publication date
01-06-2011
Publisher
Springer US
Published in
Prevention Science / Issue 2/2011
Print ISSN: 1389-4986
Electronic ISSN: 1573-6695
DOI
https://doi.org/10.1007/s11121-011-0217-6
