Program Evaluation Standards for Utility Facilitate Stakeholder Internalization of Evaluative Thinking in the West Virginia Clinical Translational Science Institute

Reagan Curtis
Abhik Roy
Nikki Lewis
Evana Nusrat Dooty
Taylor Mikalik

Abstract

Background: The Program Evaluation Standards (PES) can be considered established criteria for high-quality evaluations. We emphasize the PES Utility Standards and evaluation capacity building as we strive for meaningful real-world application of our work.


Purpose: We focused our methodology on understanding how stakeholders discussed utility and how their perceptions of our evaluation work aligned with the PES Utility Standards.


Setting: The West Virginia Clinical Translational Science Institute (WVCTSI) is a statewide, multi-institutional entity for which we have conducted tracking and evaluation since 2012.


Intervention: Sustained collaborative engagement of evaluation stakeholders with the goal of increasing their utilization of evaluation products and evaluative thinking.


Research Design: Case study.


Data Collection and Analysis: We interviewed five key stakeholders. Themes developed from coding interview data against the PES Utility Standards informed a subsequent document analysis. Together, the interview and document analyses were used to develop themes and illustrative examples, as well as to develop and describe a five-level Evaluation Uptake Scale.


Findings: We describe shifts in the initiation, use, and internalization of evaluative thinking by non-evaluation personnel that prompted the development and application of an Evaluation Uptake Scale to capture stakeholders' increased evaluation capacity over time. We discuss how a focus on the PES Utility Standards and evaluation capacity building facilitated these shifts, and their implications for maximizing the utility of evaluation activity in large, complex programmatic evaluations.


Keywords: Program evaluation standards, evaluation utility, evaluation capacity building.

Article Details

How to Cite
Curtis, R., Roy, A., Lewis, N., Dooty, E. N., & Mikalik, T. (2023). Program Evaluation Standards for Utility Facilitate Stakeholder Internalization of Evaluative Thinking in the West Virginia Clinical Translational Science Institute. Journal of MultiDisciplinary Evaluation, 19(43), 49–65. https://doi.org/10.56645/jmde.v19i43.831
Section
Evaluation Standards Scholarship
Author Biographies

Reagan Curtis, West Virginia University

Reagan Curtis is professor of educational psychology in the Department of Learning Sciences & Human Development and director of the Program Evaluation & Research Center of the College of Education & Human Services at West Virginia University.

Abhik Roy, West Virginia University

Abhik Roy is an assistant professor in the School of Education at West Virginia University.

Nikki Lewis, West Virginia University

Nikki Lewis is Partnerships and Evaluation Manager in the West Virginia Clinical Translational Science Institute.

Evana Nusrat Dooty, West Virginia University

Evana Nusrat Dooty is a doctoral student in the School of Education.

Taylor Mikalik, West Virginia University

Taylor Mikalik is a doctoral student in the School of Education.
