Metrics and Evaluation

Outcomes and Outputs

Traditionally, metrics in global health and development work have tended to measure outputs (e.g., the number of people trained), but they are now increasingly focused on outcomes (e.g., changes in behavior as a result of the training).(1)  The term ‘impact’ refers to the results that donors ultimately seek as a consequence of those outcomes (e.g., jobs created, people lifted out of poverty).  Although they are the measures needed most, outcomes and impact are difficult to measure and report in meaningful ways.

There is an important difference between outputs and outcomes, and it is crucial that global health interventions be evaluated based on their outcomes.  For example, an output would be the number of individuals enrolled in a job training program.  While enrollment can sometimes serve as a proxy for an outcome or impact, it is not synonymous with the impact itself.  In this case, the number of people enrolled is not a valuable measurement if none of those enrolled in the training program end up obtaining a job.

Therefore, in this scenario, the more important metric is the outcome: how many people who enrolled in the job training program now have a job.  It is critical to articulate the difference between outcomes and outputs when designing a measurement system and reporting results, lest activities done with the intent of creating results be confused with the results themselves.
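The distinction above can be sketched in a few lines of code.  This is a purely illustrative example with hypothetical participant records (none of the names or numbers come from the module): the output is an activity count, while the outcome is the change the program aims to produce.

```python
# Hypothetical job-training records; each dict is one participant.
# All names and values are illustrative, not from the module.
participants = [
    {"name": "A", "completed_training": True,  "employed_after": True},
    {"name": "B", "completed_training": True,  "employed_after": False},
    {"name": "C", "completed_training": True,  "employed_after": True},
    {"name": "D", "completed_training": False, "employed_after": False},
]

# Output: an activity count -- how many people enrolled.
output_enrolled = len(participants)

# Outcome: a result of the activity -- how many participants
# actually obtained a job afterwards.
outcome_employed = sum(p["employed_after"] for p in participants)

print(f"Output (enrolled): {output_enrolled}")          # 4
print(f"Outcome (employed after): {outcome_employed}")  # 2
```

As the module notes, a high output (four enrolled) is compatible with a weak outcome (two employed), which is why the two must be reported separately.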

Causality

In addition to outcomes and outputs, causality is an important concept in monitoring and evaluating global health work.  For example, in the job training case, how do we know that participants have a job specifically because of the training program and not some outside factor?  It is often not sufficient to report the output and the outcome; organizations must show that the intervention, rather than outside factors, caused the observed impact.

There is an expectation that inputs such as funding and human resources will ultimately lead to developmental impacts.  “Thus, the core task in measuring impacts is to ‘establish the counter-factual’: to discover what would have happened if the intervention had not taken place at all.”(2)  Often when organizations report outcomes, they imply that ‘without our intervention, this would not have happened’.  Evaluation models should always be clear about causality: what is expected to happen as a direct result of the intervention.  In practice, causality is difficult to assess for a variety of reasons, outlined below: (3)
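The counterfactual idea can be illustrated with a minimal numerical sketch.  The rates below are hypothetical, and the comparison-group approach is only one of several ways to approximate the counterfactual; this is a sketch of the logic, not a recommended study design.

```python
# Illustrative employment rates (fractions), not real data.
treated_rate = 0.60      # employment rate among program participants
comparison_rate = 0.45   # rate in a similar group that got no training

# Naive claim: attribute the entire treated rate to the program,
# i.e., assume 'without our intervention, this would not have happened'.
naive_effect = treated_rate

# Counterfactual estimate: approximate what would have happened
# without the intervention using the comparison group, and credit
# the program only with the difference between the two rates.
estimated_effect = treated_rate - comparison_rate

print(round(naive_effect, 2))      # 0.6
print(round(estimated_effect, 2))  # 0.15
```

The gap between the naive figure and the counterfactual estimate is exactly the over-claiming the quoted passage warns about.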

Monitoring vs. Evaluation

Monitoring and evaluation of global health and development activities provides NGOs, governments, and donors with a means to learn from past experience, improve interventions, allocate resources, and demonstrate results.  Within the global health and development community there has been a strong push to focus on results, yet there is often confusion about how to measure the impact of programs and interventions.  The tools of monitoring and evaluation offer a solution and include the following: (4)

Often used synonymously, monitoring and evaluation in fact refer to two different activities.  Monitoring involves ongoing measurement of performance, examining parameters such as cost efficacy, and assessing whether things are going well.  Ideally, these measurements are conducted and reviewed locally by the intervention team and can lead to improvements throughout the implementation stage.  Monitoring can be formally defined as “a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds”.(5)  Therefore, monitoring involves the regular tracking of resources, activities, and outputs at the project, regional, national, or global level.  This includes the monitoring of a country’s progress against the Millennium Development Goals or other standard measures of success.

Evaluation, on the other hand, is often conducted by external advisers and is focused on proving impacts rather than improving interventions.  Evaluation seeks to answer questions about whether the right things are being done, and is a measure of the performance of the program design, of the implementing team, and of the implementing agency.(6)  Evaluation can be defined as “the process of determining the worth or significance of a development activity, policy or program … to determine the relevance of objectives, the efficacy of design and implementation, the efficiency of resource use, and the sustainability of results.  An evaluation should (enable) the incorporation of lessons learned into the decision-making process of both partner and donor”.(7)

Although different, monitoring and evaluation are complementary.  Monitoring information is a necessary part of identifying best practices, yet it provides neither a sufficient nor a holistic assessment.  Monitoring information is used for ongoing management purposes, but relying solely on monitoring data can skew the picture, as it often covers only specific dimensions of a project’s or program’s activities.(8)

In order to obtain the full picture, evaluation is needed to provide a more balanced interpretation of an intervention’s impact. However, evaluation is a more detailed and time-consuming activity and is thus conducted less frequently. Organizations therefore often rely on monitoring information to identify potential problems that require more detailed investigation via evaluation.
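The division of labor described above (continuous monitoring, with evaluation triggered by what monitoring surfaces) can be sketched as a simple indicator check.  The monthly figures, target, and two-consecutive-misses rule are all hypothetical illustrations, not a standard from the M&E literature.

```python
# Hypothetical monthly monitoring data for one output indicator:
# people trained per month, against a planned monthly target.
monthly_trained = [50, 48, 52, 30, 28]   # illustrative numbers
target_per_month = 45

# Monitoring: routine tracking of the indicator against the target.
missed = [count < target_per_month for count in monthly_trained]

# Illustrative decision rule: two consecutive missed months signal a
# potential problem that warrants a deeper, less frequent evaluation.
needs_evaluation = any(missed[i] and missed[i + 1]
                       for i in range(len(missed) - 1))

print(missed)            # [False, False, False, True, True]
print(needs_evaluation)  # True
```

The point of the sketch is the workflow, not the threshold: monitoring data is cheap and continuous, so it is well suited to flagging where a costlier evaluation should look.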


Footnotes

(3) Ibid.

(5) Organization for Economic Co-Operation and Development: Development Cooperation Directorate (DCD-DAC). 2002. Glossary of Key Terms in Evaluation.

(7) Organization for Economic Co-Operation and Development: Development Cooperation Directorate (DCD-DAC). 2002. Glossary of Key Terms in Evaluation.