Module 5: Process Evaluation

Overview and Benefits of Conducting a Process Evaluation

A process evaluation focuses on the implementation process and attempts to determine how successfully the project followed the strategy laid out in the logic model.(1) As opposed to outcome or impact evaluations, a process evaluation focuses on the first three segments of the logic model (inputs, activities, and outputs) and how they work together. Process evaluations allow evaluators to make the important distinction between implementation failure and theory failure. Implementation failure is the lack of expected results due to poor implementation practices, such as unmet targets due to an insufficient number of trained community health workers or breakdowns in transportation of medication to the clinic. Theory failure is when program activities are implemented to the standards of the program design strategy but expected outcomes are not found, meaning the theory that linked the activities to expected outcomes is incorrect. For instance, the proper implementation of caretaker training on oral rehydration salt administration, coupled with unchanged incidence of acute diarrhea, would be classified as theory failure.(2)

Although emphasis is traditionally placed on impact and outcome evaluations because of their importance in the funding streams of NGOs and development agencies, there are significant benefits to conducting a process evaluation. As a checkpoint for program implementation, a process evaluation ensures the program is delivered according to its design. If any standards are not met, findings at this stage can save subsequent time and funding.(3) In addition, a process evaluation creates a feedback loop through routine assessments such as documentation of resources used, measurement of output indicators, and tracking of project reach among the target population. Because these activities are carried out routinely throughout the project timeline, data collection and analysis may reveal challenges early in the program. This allows implementers to alter program activities accordingly and, ideally, improve the chances of positive outcomes.(4) Lastly, process evaluation allows evaluators and program developers to pinpoint strengths and weaknesses within the program design and to improve upon the program in future scale-up efforts.(5)

Measuring every detail of program implementation is not feasible or necessary, especially in large-scale projects.(6) Process indicators should be developed based on the priorities of the stakeholders, clients, and evaluators, while relating back to the original evaluation question and the key processes highlighted in the logic model. For example, if the program outcomes are highly dependent upon the establishment of a supply chain, it may be beneficial for evaluators to appraise the implementation of the supply chain. In this framework, indicators may include timeliness, utilization, and frequency of stock outs in the clinic.
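To make the supply-chain example concrete, the indicators named above (timeliness and frequency of stock-outs) can be computed from routine monitoring records. The following is a minimal sketch; the delivery records, daily stock counts, and 3-day grace period are all hypothetical values chosen for illustration, not figures from any real program.

```python
from datetime import date

# Hypothetical delivery records: (scheduled_date, actual_date) per shipment.
deliveries = [
    (date(2023, 1, 10), date(2023, 1, 12)),
    (date(2023, 2, 10), date(2023, 2, 10)),
    (date(2023, 3, 10), date(2023, 3, 20)),
]

# Timeliness indicator: share of shipments arriving within a 3-day grace period.
on_time = sum(1 for sched, actual in deliveries if (actual - sched).days <= 3)
timeliness = on_time / len(deliveries)

# Stock-out indicator: proportion of monitored days with zero stock at the clinic.
daily_stock = [40, 12, 0, 0, 35, 28, 0, 50]  # illustrative daily counts
stockout_rate = sum(1 for s in daily_stock if s == 0) / len(daily_stock)

print(f"On-time delivery rate: {timeliness:.0%}")
print(f"Stock-out rate: {stockout_rate:.0%}")
```

In practice the raw records would come from the program's routine monitoring system; the point is that each indicator reduces to a simple, repeatable calculation that can be run at every monitoring interval.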

For the purpose of this explanation, it is helpful to refer to a pre-existing logic model. This example is a World Health Organization (WHO) logic model for a program seeking to improve the nutrition of the target audience in relation to the Millennium Development Goals (Source: WHO, 2011): http://www.who.int/vmnis/toolkit/WHO-CDC_Logic_Model_en.pdf.

Measurement in Process Evaluation

Developing Process Evaluation Indicators

There are four general concepts that should be measured for a process evaluation: the types of inputs, activities, and outputs, and the integration of these components.(7) For each target, evaluators will develop a number of indicators. Indicators are used as measurement guidelines, flowing directly from the logic model.(8)(9)

To demonstrate the process of developing indicators for a process evaluation, it is helpful to use a sample logic model (refer to the link above). In this example, the inputs and outputs are clearly labeled, and one of the ‘Delivery’ activities is to develop a delivery system. It can be assumed that many of the inputs listed, such as financial resources, management, and staff, will be required to run this activity. The expected output of this activity is “access to or presence of the intervention in the target communities or facilities.” As such, a basic indicator to measure the success of the output would be a binary (yes or no) indicator for the existence of the intervention in the target community at a time specified in the program timeline. This is a quality indicator, meeting all of the requirements of specificity, measurability, attainability, relevance, and timeliness. The existence of a functional delivery system also serves as a process indicator. If evaluators are interested in whether the delivery system is developed according to the program timeline or whether staff are properly trained, corresponding indicators would include “timeliness of system completion” and “ratio of staff citing preparedness to total number of staff trained.” Again, much of process evaluation planning depends on stakeholder and client priorities which, ideally, are discussed prior to planning the evaluation.
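The two indicators described above (the binary presence indicator and the staff-preparedness ratio) can be sketched as follows. The community names and staff counts are hypothetical, invented purely to show the shape of the calculation.

```python
# Hypothetical monitoring data for the logic-model example:
# is the intervention present in each target community?
target_communities = {"A": True, "B": True, "C": False}

# Binary output indicator: intervention present in every target community?
intervention_present_all = all(target_communities.values())

# Process indicator: ratio of staff citing preparedness to total staff trained.
staff_trained = 40
staff_citing_preparedness = 34
preparedness_ratio = staff_citing_preparedness / staff_trained

print(intervention_present_all)     # one community not yet reached
print(f"{preparedness_ratio:.0%}")
```

Note that the binary indicator can also be reported per community rather than as a single yes/no, which is often more useful for identifying where implementation is lagging.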

Determining Baseline

Once process indicators have been defined, evaluators may collect data prior to the start of the intervention to determine the baseline for each indicator. These data show the current situation in the area of interest (e.g., current existence of delivery systems, quality of supply chains, number of trained staff).(10) This is important because it allows program staff to set realistic targets for each indicator. Also, measuring indicators only after the intervention has been completed does not allow evaluators to show what the program activities accomplished, reducing the likelihood of scale-up or continued funding.(11) For more information on baseline and pre-intervention measurements, refer to the Evaluation Study Designs module.
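In practice, baseline data collection amounts to recording a pre-intervention value for each defined indicator so that later measurements have a point of comparison. A minimal sketch, with entirely hypothetical indicator names and values:

```python
# Hypothetical baseline measurements recorded before the intervention starts.
# Each key is a process indicator defined during evaluation planning.
baseline = {
    "delivery_system_exists": False,  # no delivery system in place yet
    "on_time_delivery_rate": 0.0,     # no deliveries to measure
    "staff_trained": 5,               # staff trained under a prior program
}

for name, value in baseline.items():
    print(f"{name}: {value}")
```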

Target Indicators

After all data on relevant indicators have been collected post-intervention, evaluators must compare the values to a target, goal, or standard indicator in order to give meaning to the evaluation. Target indicators are established in coordination with the program development team, informed by baseline indicator information. A common oversight in this step is to set indicator targets without first referring to baseline information. When this happens, extremely high or low indicator targets may misrepresent the quality of the program, creating misleading information.(12)(13) The program development team may have pre-determined targets for activities and outputs required for the program to be considered successful. Target indicators can be used as comparison tools throughout the monitoring and evaluation process for mid- and post-intervention measurements. This gives the evaluators an opportunity to convey progress to clients and stakeholders throughout the implementation period.(14)
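The comparison step described above reduces to lining up three values per indicator: baseline, measured value, and target. The sketch below uses hypothetical indicator names and numbers to show how change from baseline and target attainment might be reported together.

```python
# Hypothetical indicator values: (baseline, post-intervention, target).
indicators = {
    "on_time_delivery_rate": (0.40, 0.75, 0.80),
    "staff_preparedness_ratio": (0.10, 0.85, 0.80),
}

results = {}
for name, (base, post, target) in indicators.items():
    # Report both progress since baseline and whether the target was met:
    # a large improvement can still fall short of the target, and vice versa.
    results[name] = {"change": post - base, "met": post >= target}
    status = "met" if results[name]["met"] else "not met"
    print(f"{name}: change {results[name]['change']:+.0%}, target {status}")
```

Reporting change alongside target attainment matters: in the example, on-time delivery improved substantially yet still missed its target, a distinction that a simple pass/fail comparison would hide.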

Go To Module 6: Impact Evaluation >>

Footnotes

(1) United States Agency for International Development [USAID]. (2009). An evaluation framework for USAID-funded TIP prevention and victim protection programs. Social Transition Team, Bureau for Europe and Eurasia.

(2) Weiss, C.H. (1997). Theory-based evaluation: Past, present, and future. New Directions for Evaluation, 76:41-55.

(3) Saunders, R.P., Evans, M.H., Joshi, P. (2005). Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice, 6(2):134-147.

(4) Centers for Disease Control and Prevention. (2008). Introduction to process evaluation in tobacco use prevention and control. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

(5) Issel, L.M. (2009). Health program planning and evaluation: A practical, systematic approach for community health (2nd ed.). Sudbury, MA: Jones and Bartlett Publishers.

(6) Issel, L.M. (2009).

(7) Centers for Disease Control and Prevention. (2008).

(8) Kusek, J.Z. and Rist, R.C. (2004). Ten steps to a results-based monitoring and evaluation system. Washington, DC: The International Bank for Reconstruction and Development/The World Bank.

(9) Centers for Disease Control and Prevention. (2008).

(10) Kusek, J.Z. and Rist, R.C. (2004).

(11) United Nations World Food Programme. (n.d.). Monitoring and evaluation guidelines: How to plan a baseline study. Rome, Italy: UNWFP Office of Evaluation and Monitoring.

(12) United Nations Population Fund. (2004). Programme manager’s planning, monitoring and evaluation toolkit. New York, NY: UNFPA Division for Oversight Services.

(13) Kusek, J.Z. and Rist, R.C. (2004).

(14) Bamberger, M., Rugh, J., and Mabry, L. (2006). Real world evaluation: Working under budget, time, data, and political constraints. Thousand Oaks, CA: Sage Publications, Inc.