Module 9: Institutionalizing Evaluation

There is growing demand for accountability over government funds budgeted for development programs. Taxpayers and government officials want to know exactly how money is being spent and what impact it is having. One strategy for improving accountability is requiring program evaluation: evaluations document program inputs, outputs, outcomes, and impacts, tracking the use of such funds. However, rigorous evaluation at the level of outcomes and impacts remains inconsistent, as conducting evaluations often depends on the availability of data and funds and on the interest of donors and program management.(1) Despite attempts to create more stringent guidelines for conducting evaluations, the NGOs and development agencies that perform them often do so with inadequate or poor-quality data. With these challenges in mind, many countries have begun to institutionalize monitoring and evaluation (M&E) systems managed by the government. As of 2009, this process had been initiated in only a handful of developing countries, mostly in Latin America.(2)

In order for a government to institutionalize a monitoring and evaluation system, two key structures must exist:

1. Political will among leading decision-makers to demand and use evaluation; and
2. National capacity to conduct, facilitate, and manage quality evaluations.(3)

To illustrate these two criteria, the case study below provides an overview of the monitoring and evaluation system institutionalized in Mexico. For more examples, the World Bank and Inter-American Development Bank have published a number of high-quality case studies of countries with institutionalized monitoring and evaluation systems, available online: Towards the Institutionalization of Monitoring and Evaluation Systems in Latin America and the Caribbean.

Case Study: Institutionalization of a Monitoring and Evaluation System in Mexico

The case of M&E in Mexico highlights the key aspects of institutionalizing evaluation. In a presentation on the success of Mexico’s evaluation programs, the director of social programs, Gonzalo Hernandez, described a number of contextual factors that enabled institutionalization in 2000–2001. Mexico funded social development programs throughout the 1990s, but these programs went unevaluated due to a lack of funds, demand, and political will. In 1997, Progresa (later Oportunidades), a well-known and highly effective national social development program, implemented a large-scale impact evaluation with funding from the Inter-American Development Bank (IADB), which required evaluations from all of its grantees. This established a standard of quality evaluation and laid the foundation for a new national policy.

In 1999, Congress declared that all government programs would be subject to annual evaluation. This change was intended to improve accountability for program actions and to counter earlier attempts by presidential candidates to buy votes by creating social programs that were unsustainable and offered no long-run benefit.(4) The creation of CONEVAL (Consejo Nacional de Evaluación de la Política de Desarrollo Social) fostered an “evaluation culture” by normalizing the process of evaluation across all social development programs. CONEVAL issued guidelines that standardized evaluations, making them more useful for policy-makers and program developers, and improved the country’s capacity to facilitate and manage evaluations.(5) With regard to transparency and government accountability, the new law required all evaluation findings to be released to the public. With this added layer of oversight, both evaluations and their recommendations could be tracked to ensure improvements in implementation strategies.(6)

In accordance with the two key structures mentioned above, the government of Mexico first fostered the political will to institutionalize evaluation through its vested interest in social development programs. Prior to the policy change, government funds had been disbursed over time with most programs reporting only outputs, not outcomes, for evaluation purposes.(7) When Congress moved to institutionalize evaluation procedures for all government programs, the policy was widely accepted among leading decision-makers. Second, through the creation of CONEVAL, the government built substantial evaluation capacity through several strategies: holding seminars for government officials to strengthen knowledge of measurement techniques, coordinating among ministries and programs to foster the exchange of ideas, and maintaining a public evaluation database to support knowledge-sharing and collaboration. With these developments, CONEVAL contributed robust, high-quality evaluation capacity to Mexico’s M&E system.(8)

Go To Module 10: Distinguishing Evaluation from Research >>

Footnotes

(1) Independent Evaluation Group (IEG). (2009). Institutionalizing impact evaluation within the framework of a monitoring and evaluation system. Washington, DC: The International Bank for Reconstruction and Development/The World Bank.

(2) Ibid.

(3) 3ie Global Development Network. (2009). Impact evaluation: How to institutionalize evaluation? New Delhi: 3ie Impact.

(4) May, E., Shand, D., Mackay, K., Rojas, F. and Saavedra, J. (2006). Towards the institutionalization of monitoring and evaluation systems in Latin America and the Caribbean: Proceedings of a World Bank/Inter-American Development Bank conference. Washington, DC: The International Bank for Reconstruction and Development / The World Bank.

(5) The World Bank. (n.d.). Results of the expert roundtables on innovative performance measurement tools.

(6) Briceno, B. and Gaarder, M.M. (2009). Institutionalizing evaluation: Review of international experience. New Delhi, India: International Initiative for Impact Evaluation/Global Development Network.

(7) Ibid.

(8) Consejo Nacional de Evaluación de la Política de Desarrollo Social. (n.d.). Functions.