Program Evaluation Standards for Utility Facilitate Stakeholder Internalization of Evaluative Thinking in the West Virginia Clinical Translational Science Institute


Reagan Curtis
Abhik Roy
Nikki Lewis
Evana Nusrat Dooty
Taylor Mikalik


Background: The Program Evaluation Standards (PES) are established criteria for high-quality evaluations. We emphasize the PES Utility Standards and evaluation capacity building as we strive for meaningful real-world application of our work.

Purpose: We focused our methodology on understanding how stakeholders discussed utility and how their perceptions of our evaluation work aligned with the PES Utility Standards.

Setting: The West Virginia Clinical Translational Science Institute (WVCTSI) is a statewide, multi-institutional entity for which we have conducted tracking and evaluation since 2012.

Intervention: Sustained collaborative engagement of evaluation stakeholders with the goal of increasing their utilization of evaluation products and evaluative thinking.

Research Design: Case study.

Data Collection and Analysis: We interviewed five key stakeholders. Themes developed from coding interview data against the PES Utility Standards informed a subsequent document analysis. Together, the interview and document analyses were used to develop themes and illustrative examples, as well as to develop and describe a five-level Evaluation Uptake Scale.

Findings: We describe shifts in the initiation, use, and internalization of evaluative thinking by non-evaluation personnel that prompted the development and application of an Evaluation Uptake Scale to capture stakeholders' increased evaluation capacity over time. We discuss how a focus on the PES Utility Standards and evaluation capacity building facilitated these shifts, and their implications for maximizing the utility of evaluation activity in large, complex programmatic evaluations.

Keywords: Program evaluation standards, evaluation utility, evaluation capacity building.



Article Details

How to Cite
Curtis, R., Roy, A., Lewis, N., Dooty, E. N., & Mikalik, T. (2023). Program Evaluation Standards for Utility Facilitate Stakeholder Internalization of Evaluative Thinking in the West Virginia Clinical Translational Science Institute. Journal of MultiDisciplinary Evaluation, 19(43), 49–65.
Evaluation Standards Scholarship
Author Biographies

Reagan Curtis, West Virginia University

Reagan Curtis is professor of educational psychology in the Department of Learning Sciences & Human Development and director of the Program Evaluation & Research Center of the College of Education & Human Services at West Virginia University.

Abhik Roy, West Virginia University

Abhik Roy is assistant professor in the School of Education.

Nikki Lewis, West Virginia University

Nikki Lewis is Partnerships and Evaluation Manager in the West Virginia Clinical Translational Science Institute.

Evana Nusrat Dooty, West Virginia University

Evana Nusrat Dooty is a doctoral student in the School of Education.

Taylor Mikalik, West Virginia University

Taylor Mikalik is a doctoral student in the School of Education.

