
Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions

Authors: Christine M. Steeger, Pamela R. Buckley, Fred C. Pampel, Charleen J. Gust, Karl G. Hill

Published in: Prevention Science | Issue 8/2021


Abstract

Randomized controlled trials (RCTs) are often considered the gold standard in evaluating whether intervention results are in line with causal claims of beneficial effects. However, given that poor design and incorrect analysis may lead to biased outcomes, simply employing an RCT is not enough to say an intervention “works.” This paper applies a subset of the Society for Prevention Research (SPR) Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research, with a focus on internal validity (making causal inferences), to determine the degree to which RCTs of preventive interventions are well designed and analyzed, and whether authors provide a clear description of the methods used to report their study findings. We conducted a descriptive analysis of 851 RCTs published from 2010 to 2020 and reviewed by the Blueprints for Healthy Youth Development web-based registry of scientifically proven and scalable interventions. We used Blueprints’ evaluation criteria that correspond to a subset of SPR’s standards of evidence. Only 22% of the sample satisfied important criteria for minimizing biases that threaten internal validity. Overall, we identified an average of 1–2 methodological weaknesses per RCT. The most frequent sources of bias were problems related to baseline non-equivalence (i.e., differences between conditions at randomization) or differential attrition (i.e., differences between completers versus attritors or differences between study conditions that may compromise the randomization). Additionally, over half the sample (51%) had missing or incomplete tests to rule out these potential sources of bias. Most preventive intervention RCTs need improvement in rigor to permit causal inference claims that an intervention is effective. Researchers also must improve reporting of methods and results to fully assess methodological quality. These advancements will increase the usefulness of preventive interventions by ensuring the credibility and usability of RCT findings.
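
To make the two recurring sources of bias concrete, the sketch below (not drawn from the article; the simulated trial, variable names, and sample size are hypothetical) shows how an analyst might check baseline equivalence and differential attrition in Python with SciPy: a t-test comparing a baseline measure across conditions, attrition rates by condition with a chi-square test of dropout by condition, and a comparison of completers versus attritors at baseline.

# Illustrative sketch only; the trial data here are simulated, not from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2021)
n = 400  # hypothetical sample size

# Hypothetical trial data: condition assignment, a baseline outcome measure,
# and a flag for whether the participant completed the follow-up assessment.
condition = rng.integers(0, 2, size=n)             # 0 = control, 1 = intervention
baseline = rng.normal(50, 10, size=n)              # baseline score
completed = rng.random(n) < np.where(condition == 1, 0.85, 0.80)

# 1) Baseline equivalence: compare the baseline measure across conditions.
#    A marked difference at randomization signals possible non-equivalence.
t_base, p_base = stats.ttest_ind(baseline[condition == 1], baseline[condition == 0])
print(f"Baseline equivalence: t = {t_base:.2f}, p = {p_base:.3f}")

# 2) Differential attrition: attrition rate by condition, plus a chi-square
#    test of whether dropout depends on study condition.
for c, label in ((0, "control"), (1, "intervention")):
    print(f"Attrition ({label}): {1 - completed[condition == c].mean():.1%}")

dropout_table = np.array(
    [[np.sum((condition == c) & completed), np.sum((condition == c) & ~completed)]
     for c in (0, 1)]
)
chi2, p_attr, _, _ = stats.chi2_contingency(dropout_table)
print(f"Dropout by condition: chi2 = {chi2:.2f}, p = {p_attr:.3f}")

# 3) Completers vs. attritors: do those lost to follow-up differ at baseline?
t_attr, p_ca = stats.ttest_ind(baseline[completed], baseline[~completed])
print(f"Completers vs. attritors at baseline: t = {t_attr:.2f}, p = {p_ca:.3f}")

As the abstract notes, over half of the reviewed trials reported these kinds of checks incompletely or not at all, so even simple summaries like the ones above help readers judge internal validity.
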
Literature
Altman, D. G. (1985). Comparability of randomised groups. Statistician, 34, 125–136.
Altman, D. G., & Dore, C. J. (1990). Randomisation and baseline comparisons in clinical trials. The Lancet, 335(8682), 149–153.
Bastian, H., Glasziou, P., & Chalmers, I. (2010). Seventy-five trials and eleven systematic reviews a day: How will we ever keep up? PLoS Medicine, 7(9), e1000326.
Bickman, L., & Reich, S. M. (2015). Randomized controlled trials: A gold standard or gold plated? In Credible and actionable evidence: The foundation for rigorous and influential evaluations (pp. 83–113). Sage.
Bonell, C. (2002). The utility of randomized controlled trials of social interventions: An examination of two trials of HIV prevention. Critical Public Health, 12(4), 321–334.
Brincks, A., Montag, S., Howe, G. W., Huang, S., Siddique, J., Ahn, S., & Brown, C. H. (2018). Addressing methodologic challenges and minimizing threats to validity in synthesizing findings from individual-level data across longitudinal randomized trials. Prevention Science, 19(1), 60–73.
Buckley, P. R., Ebersole, C. R., Steeger, C. M., Michaelson, L. E., Hill, K. G., & Gardner, F. (2021). The role of clearinghouses in promoting transparent research: A methodological study of transparency practices for preventive interventions. Prevention Science. https://doi.org/10.1007/s11121-021-01252-5
Buckley, P. R., Fagan, A. A., Pampel, F. C., & Hill, K. G. (2020). Making evidence-based interventions relevant for users: A comparison of requirements for dissemination readiness across program registries. Evaluation Review, 44(1), 51–83.
Burkhardt, J. T., Schröter, D. C., Magura, S., Means, S. N., & Coryn, C. L. (2015). An overview of evidence-based program registers (EBPRs) for behavioral health. Evaluation and Program Planning, 48, 92–99.
Chilenski, S. M., Pasch, K. E., Knapp, A., Baker, E., Boyd, R. C., Cioffi, C., & Rulison, K. (2020). The Society for Prevention Research 20 years later: A summary of training needs. Prevention Science, 21(7), 985–1000.
Cook, T. D. (2018). Twenty-six assumptions that have to be met if single random assignment experiments are to warrant "gold standard" status: A commentary on Deaton and Cartwright. Social Science & Medicine, 210, 37–40.
Cook, T. D., & Campbell, D. T. (1979). The design and conduct of true experiments and quasi-experiments in field settings. In Research in organizations: Issues and controversies. Goodyear Publishing Company.
Deaton, A., & Cartwright, N. (2018). Understanding and misunderstanding randomized controlled trials. Social Science & Medicine, 210, 2–21.
Dechartres, A., Trinquart, L., Faber, T., & Ravaud, P. (2016). Empirical evaluation of which trial characteristics are associated with treatment effect estimates. Journal of Clinical Epidemiology, 77, 24–37.
Deke, J., & Chiang, H. (2017). The WWC attrition standard: Sensitivity to assumptions and opportunities for refining and adapting to new contexts. Evaluation Review, 41(2), 130–154.
Fagan, A. A., & Buchanan, M. (2016). What works in crime prevention? Comparison and critical review of three crime prevention registries. Criminology & Public Policy, 15(3), 617–649.
Falagas, M. E., Grigori, T., & Ioannidou, E. (2009). A systematic review of trends in the methodological quality of randomized controlled trials in various research fields. Journal of Clinical Epidemiology, 62(3), 227–231.
Farrington, D. P., & Petrosino, A. (2001). The Campbell Collaboration Crime and Justice Group. The Annals of the American Academy of Political and Social Science, 578(1), 35–49.
Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., & Ji, P. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6(3), 151–175.
Gottfredson, D. C., Cook, T. D., Gardner, F. E., Gorman-Smith, D., Howe, G. W., Sandler, I. N., & Zafft, K. M. (2015). Standards of evidence for efficacy, effectiveness, and scale-up research in prevention science: Next generation. Prevention Science, 16(7), 893–926.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549–576.
Grant, S., Mayo-Wilson, E., Montgomery, P., Macdonald, G., Michie, S., Hopewell, S., & Moher, D. (2018). CONSORT-SPI 2018 explanation and elaboration: Guidance for reporting social and psychological intervention trials. Trials, 19(1), 406.
Grant, S., Montgomery, P., Hopewell, S., Macdonald, G., Moher, D., & Mayo-Wilson, E. (2013a). Developing a reporting guideline for social and psychological intervention trials. Research on Social Work Practice, 23(6), 595–602.
Grant, S. P., Mayo-Wilson, E., Melendez-Torres, G., & Montgomery, P. (2013b). Reporting quality of social and psychological intervention trials: A systematic review of reporting guidelines and trial publications. PLoS One, 8(5), e65442.
Hedges, L. V., & Hedberg, E. C. (2007). Intraclass correlation values for planning group-randomized trials in education. Educational Evaluation and Policy Analysis, 29(1), 60–87.
Henry, D., Tolan, P., Gorman-Smith, D., & Schoeny, M. (2017). Alternatives to randomized control trial designs for community-based prevention evaluation. Prevention Science, 18(6), 671–680.
Higgins, J. P., Altman, D. G., Gøtzsche, P. C., Jüni, P., Moher, D., Oxman, A. D., & Sterne, J. A. (2011). The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ, 343, d5928.
Hopewell, S., Dutton, S., Yu, L. M., Chan, A. W., & Altman, D. G. (2010). The quality of reports of randomised trials in 2000 and 2006: Comparative study of articles indexed in PubMed. BMJ, 340, c723.
Ioannidis, J. P. (2018). Randomized controlled trials: Often flawed, mostly useless, clearly indispensable: A commentary on Deaton and Cartwright. Social Science & Medicine, 210, 53.
Jeličić, H., Phelps, E., & Lerner, R. M. (2009). Use of missing data methods in longitudinal studies: The persistence of bad practices in developmental psychology. Developmental Psychology, 45(4), 1195.
Kristman, V. L., Manno, M., & Côté, P. (2005). Methods to account for attrition in longitudinal data: Do they work? A simulation study. European Journal of Epidemiology, 20(8), 657–662.
Lachin, J. M. (2000). Statistical considerations in the intent-to-treat principle. Controlled Clinical Trials, 21(3), 167–189.
Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data (Vol. 793). John Wiley & Sons.
Mayo-Wilson, E., Grant, S., Hopewell, S., Macdonald, G., Moher, D., & Montgomery, P. (2013). Developing a reporting guideline for social and psychological intervention trials. Trials, 14(1), 242.
Means, S. N., Magura, S., Burkhardt, J. T., Schröter, D. C., & Coryn, C. L. (2015). Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Evaluation and Program Planning, 48, 100–116.
Mihalic, S. F., & Elliott, D. S. (2015). Evidence-based programs registry: Blueprints for Healthy Youth Development. Evaluation and Program Planning, 48, 124–131.
Montgomery, P., Grant, S., Mayo-Wilson, E., Macdonald, G., Michie, S., Hopewell, S., & Moher, D. (2018). Reporting randomised trials of social and psychological interventions: The CONSORT-SPI 2018 Extension. Trials, 19(1), 407.
Murray, D. M., Pals, S. L., George, S. M., Kuzmichev, A., Lai, G. Y., Lee, J. A., & Nelson, S. M. (2018). Design and analysis of group-randomized trials in cancer: A review of current practices. Preventive Medicine, 111, 241–247.
Murray, D. M., Taljaard, M., Turner, E. L., & George, S. M. (2020). Essential ingredients and innovations in the design and analysis of group-randomized trials. Annual Review of Public Health, 41, 1–19.
Murray, D. M., Varnell, S. P., & Blitstein, J. L. (2004). Design and analysis of group-randomized trials: A review of recent methodological developments. American Journal of Public Health, 94(3), 423–432.
Nicholson, J. S., Deboeck, P. R., & Howard, W. (2017). Attrition in developmental psychology: A review of modern missing data reporting and practices. International Journal of Behavioral Development, 41(1), 143–153.
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606.
Pigott, T. D., & Polanin, J. R. (2020). Methodological guidance paper: High-quality meta-analysis in a systematic review. Review of Educational Research, 90(1), 24–46.
Pocock, S. J., Assmann, S. E., Enos, L. E., & Kasten, L. E. (2002). Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: Current practice and problems. Statistics in Medicine, 21(19), 2917–2930.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879.
Puma, M. J., Olsen, R. B., Bell, S. H., & Price, C. (2009). What to do when data are missing in group randomized controlled trials (NCEE 2009–0049). National Center for Education Evaluation and Regional Assistance.
Raab, G. M., Day, S., & Sales, J. (2000). How to select covariates to include in the analysis of a clinical trial. Controlled Clinical Trials, 21(4), 330–342.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (Vol. 1). Sage.
Raudenbush, S. W., & Schwartz, D. (2020). Randomized experiments in education, with implications for multilevel causal inference. Annual Review of Statistics and Its Application, 7, 177–208.
Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7(2), 147.
Schulz, K. F., Altman, D. G., Moher, D., & the CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Trials, 11(1), 32.
Senn, S. (1994). Testing for baseline balance in clinical trials. Statistics in Medicine, 13(17), 1715–1726.
Shadish, W. R., & Cook, T. D. (2009). The renaissance of field experimentation in evaluating interventions. Annual Review of Psychology, 60, 607–629.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
Song, M., & Herman, R. (2010). Critical issues and common pitfalls in designing and conducting impact studies in education: Lessons learned from the What Works Clearinghouse (Phase I). Educational Evaluation and Policy Analysis, 32(3), 351–371.
Spieth, P. M., Kubasch, A. S., Penzlin, A. I., Illigens, B. M.-W., Barlinn, K., & Siepmann, T. (2016). Randomized controlled trials—A matter of design. Neuropsychiatric Disease and Treatment, 12, 1341.
Sterne, J. A., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., & Higgins, J. P. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366.
Thomson, D., Hartling, L., Cohen, E., Vandermeer, B., Tjosvold, L., & Klassen, T. P. (2010). Controlled trials in children: Quantity, methodological quality and descriptive characteristics of pediatric controlled trials published 1948–2006. PLoS One, 5(9), e13106.
Torgerson, D. J., & Torgerson, C. J. (2003). Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies, 51(1), 36–45.
Wadhwa, M., & Cook, T. D. (2019). The set of assumptions randomized control trials make and their implications for the role of such experiments in evidence-based child and adolescent development research. New Directions for Child and Adolescent Development, 2019(167), 17–37.
Walleser, S., Hill, S. R., & Bero, L. A. (2011). Characteristics and quality of reporting of cluster randomized trials in children: Reporting needs improvement. Journal of Clinical Epidemiology, 64(12), 1331–1340.
West, S. G. (2009). Alternatives to randomized experiments. Current Directions in Psychological Science, 18(5), 299–304.
West, S. G., & Thoemmes, F. (2010). Campbell's and Rubin's perspectives on causal inference. Psychological Methods, 15(1), 18.
What Works Clearinghouse (WWC). (2020). WWC procedures and standards handbook (Version 4.1). Washington, DC: US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance.
Wilson, D. B. (2009). Missing a critical piece of the pie: Simple document search strategies inadequate for systematic reviews. Journal of Experimental Criminology, 5(4), 429–440.
Metadata
Title
Common Methodological Problems in Randomized Controlled Trials of Preventive Interventions
Authors
Christine M. Steeger
Pamela R. Buckley
Fred C. Pampel
Charleen J. Gust
Karl G. Hill
Publication date
01-11-2021
Publisher
Springer US
Published in
Prevention Science / Issue 8/2021
Print ISSN: 1389-4986
Electronic ISSN: 1573-6695
DOI
https://doi.org/10.1007/s11121-021-01263-2
