Background: Randomized controlled trials (RCTs) are frequently not an option in evaluation practice, so evaluators turn to non-experimental methods, such as the “counterfactual as self-estimated by program participants” (CSEPP), for estimating intervention effects. Unfortunately, no systematic attempt has been made to test under what conditions CSEPP provides valid estimates.
Purpose: As a first step in this direction, this research compared the bias of CSEPP estimates across groups of participants with different levels of education, across different outcome variables, and across different question orders within the questionnaire.
Intervention: The treatment used in this research was a short educational video that teaches the audience important concepts and aspects of organ donation.
Research Design: Because bias in CSEPP is difficult to investigate at the participant level, a series of 40 studies was conducted and bias was analyzed at the study level. In each study, the effect of the same treatment was estimated by CSEPP and compared with the effect estimated by a simultaneously conducted RCT. We then analyzed whether differences between CSEPP and RCT estimates across the studies were driven by variation in the conditions under which the studies were conducted. Despite the small sample sizes of the individual trials, the meta-analysis was sufficiently powered to detect even small differences between CSEPP and RCT.
Data Collection and Analysis: The data were collected via online surveys on a crowdsourcing portal. For data analysis, we applied meta-analytic methods, including random-effects meta-analysis and meta-regression.
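The study-level comparison described above can be sketched as inverse-variance random-effects pooling of per-study differences between CSEPP and RCT effect estimates. The following is a minimal illustrative implementation of the DerSimonian-Laird estimator under stated assumptions; the function name and example data are hypothetical, and the original analysis was presumably run in dedicated meta-analytic software rather than code like this.

```python
import math

def random_effects_meta(diffs, variances):
    """Pool per-study differences (e.g., CSEPP estimate minus RCT estimate)
    with a DerSimonian-Laird random-effects model.

    diffs:     list of study-level effect differences
    variances: list of sampling variances of those differences
    Returns (pooled_diff, tau2, se) where tau2 is the estimated
    between-study variance and se the standard error of the pooled diff.
    """
    w = [1.0 / v for v in variances]                    # fixed-effect weights
    fe = sum(wi * d for wi, d in zip(w, diffs)) / sum(w)
    q = sum(wi * (d - fe) ** 2 for wi, d in zip(w, diffs))  # Cochran's Q
    df = len(diffs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # DL estimate, floored at 0
    w_star = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_star, diffs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, tau2, se
```

A pooled difference near zero with a confidence interval (pooled ± 1.96 × se) covering zero would be consistent with the finding that CSEPP and RCT estimates agree; meta-regression would additionally regress the differences on study-level conditions such as education level or question order.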
Findings: Results show that CSEPP provided accurate effect estimates regardless of the conditions under which the method was applied.
Copyright 2016 Journal of MultiDisciplinary Evaluation, Western Michigan University.