Why an Active Control Group Makes a Difference and What to Do About It


Lois-ellin Datta

Abstract

The randomized controlled trial (RCT) design and its quasi-experimental kissing cousin, the comparison group trial (CGT), are golden to some and not even silver to others. At the center of the affection, and at the vortex of the discomfort, are beliefs about what it takes to establish causality. These designs are considered primarily when the purpose of the evaluation is to establish whether there are outcomes associated with a program and, if so, how confidently those outcomes can be attributed to the program. If one concludes that these designs are superior to the alternatives for establishing causality, and that they have no more bad habits than the alternatives, then the RCT and the CGT are the methods of choice.


Article Details

How to Cite
Datta, L.-e. (2007). Why an Active Control Group Makes a Difference and What to Do About It. Journal of MultiDisciplinary Evaluation, 4(7), 1–12. https://doi.org/10.56645/jmde.v4i7.5
Section
Research Articles
