Research Methods Brief: Anatomy of Process Evaluations for P/CVE



Preventing Violent Extremism, PVE, Countering Violent Extremism, CVE, P/CVE, Design, Evaluation, Process Evaluation


Process evaluations are evaluations focused on understanding how a program is implemented. This can also include evaluating the extent to which a program is implemented according to plan (i.e., evaluating its "program fidelity"). In short, process evaluations seek to identify a program's "moving parts" and to assess the extent to which those parts are functioning as intended. Ideally, that includes uncovering the theoretical mechanisms, the reasons "why," a program's outputs or outcomes are (or are not) achieved. Understanding why a program is (or is not) working as well as expected is the backbone of evidence-based P/CVE program design and evaluation, and it is essential to sound P/CVE program management decision-making. Consequently, any scientifically grounded P/CVE research or evaluation project must include at least some element of process evaluation. This research methods brief describes the fundamental components of process evaluations, along with common pitfalls and ways to avoid them, within the context of P/CVE program design and evaluation.


