Background: This paper presents an in-depth international comparison of the systems and procedures of aid evaluation, focusing on Country Program Evaluation among major donor agencies. The original client of this study was the Ministry of Foreign Affairs of Japan (MOFAJ).
Purpose: The purposes of this paper are twofold: (1) to understand how aid agencies conduct Country Program Evaluation; and (2) to make recommendations for improving the current practice of Country Program Evaluation in the aid evaluation community.
Setting: The examined donors include the World Bank (WB), the Asian Development Bank (ADB), the Inter-American Development Bank (IADB), the United Nations Development Programme (UNDP), the U.S. (USAID), Canada (CIDA), the U.K. (DFID), the Netherlands (IOB), Germany (BMZ), France (Foreign Ministry), and Japan (Ministry of Foreign Affairs (MOFAJ)). In addition, aid agencies that conduct their own project evaluations are also examined: JICA (Japan), GTZ and KfW (Germany), and AFD (France).
Intervention: This study presents the results of a comparative analysis of these donor agencies from the following viewpoints: (1) evaluation criteria employed; (2) approaches to evaluating "effectiveness" and "impact"; (3) the attribution issue; (4) the use of a rating system; and (5) the overall evaluative conclusion and methods for integration. All viewpoints focus on Country Program Evaluation. One conclusion is that most agencies have struggled with how to judge the degree of success and value of their country programs.
Data Collection and Analysis: Mixed methodologies were employed to collect data from these donor agencies. The analysis followed a systematic procedure consisting of: (i) summarizing information in a comparative table; (ii) forming groups/categories based on common characteristics where possible; and (iii) identifying the underlying thinking/philosophy that accounts for their differences.
Findings: This study generated new knowledge about how aid agencies conduct Country Program Evaluation and identified several remaining issues. A wide variety of practices was observed, far from a set of unified, agreed-upon methods. Notable points identified in this study are: (1) most aid agencies invoke the five DAC evaluation criteria for Country Program Evaluation (the major exception is USAID); (2) "strategic relevance" and "coherence/complementarity" are emerging as new criteria; (3) attribution remains an issue with which aid agencies struggle; and (4) attitudes toward the introduction of a rating system are clearly divided among aid agencies.
Copyright and Permissions
Copyright for articles published in JMDE is retained by their authors under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). Users are allowed to copy, distribute, and transmit the work in any medium or format for noncommercial purposes provided that the original authors and source are credited. Only the original authors may distribute the article for commercial or compensatory purposes. To view a copy of this license, visit creativecommons.org