Interpreting Research Studies

Research is critical to evaluating and monitoring public health programs. However, not all research studies are scientifically sound or reliable. Thus, when interpreting research studies, it is important to consider each study's survey methodology, data quality, and validity. It is also important to determine whether the study included a control or baseline group.

Methodology

The design of a research study is essential to ensuring quality data and reliable results. Many research studies use survey questions to gather information. However, researchers sometimes ask biased questions, or enter the study with a biased viewpoint, which can elicit a certain response from the study participants. Many studies also include leading questions, which suggest a desired answer and therefore introduce bias into the study. For example, a study intended to evaluate villagers' perceptions of researchers in a community might ask: "Are the researchers helping your community?" This question leads respondents to agree that good work is being done and may not reflect how they actually perceive the researchers. A question that more accurately reflects the beliefs of the villagers would be: "How do you feel about the work the researchers are doing in your community?" With this question structure, respondents are not prompted to answer in a particular manner. Since leading questions can introduce bias into a study and lead to inaccurate conclusions, it is important for public health professionals to analyze the types of questions included in a study before acting upon the data. Results from studies that use leading questions may not be reliable and should be viewed with skepticism.

Data Quality

Quality data is critical to assessing the global burden of disease and developing public health initiatives. To ensure data quality, data must be managed correctly from the time of collection until the time of analysis. Unfortunately, there are many challenges to obtaining quality data in resource-poor settings. Many resource-poor countries lack reliable civil registration systems to record births and deaths and generate other vital statistics. To gauge the reliability of a study, it is therefore important to assess how the data were collected, as well as the source of the data. Limited infrastructure in resource-poor settings may also reduce the completeness and accuracy of the collected data, while a lack of technology and data management systems further complicates data collection and management. All of these factors must be considered when assessing the quality and reliability of a research study.

Validity

Though it is often assumed that a study's results are valid or conclusive, this is unfortunately not always the case. Researchers who conduct scientific studies are often motivated by external factors, such as the desire to get published, advance their careers, or receive funding. As a consequence, a significant number of scientific studies are biased and unreliable. It is important to be able to distinguish between reliable studies and those that are poorly designed, poorly implemented, or inconclusive. Reliable studies use random samples whenever possible, employ appropriate sample sizes, avoid bias, and are conducted by researchers who are not influenced by funding or the desire to obtain particular results.

Randomization

Randomization is critical to ensuring the validity of research. Randomized trials in the clinical setting randomly assign individuals either to receive a medical intervention or to receive a placebo. To avoid bias, group assignment is determined randomly before the trial commences. The goal of randomization is to produce groups that are comparable in participant demographics, so that any difference in outcomes can be attributed to the intervention. Studies that are not randomized are less likely to be reliable, and their conclusions should be viewed with uncertainty.
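The idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the participant IDs and function name are invented for this sketch, not drawn from any study discussed here): assignment depends only on a random shuffle, never on participant characteristics, which is what makes the resulting groups comparable on average.

```python
import random

def randomize(participants, seed=None):
    """Randomly split participants into treatment and control groups.

    Assignment is fixed before the trial begins and is independent of
    any participant characteristic.
    """
    rng = random.Random(seed)
    shuffled = list(participants)  # copy so the input list is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]  # (treatment, control)

# Example with six hypothetical participant IDs:
treatment, control = randomize(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Seeding the random number generator here only makes the sketch reproducible; in a real trial the essential property is that no one can predict or influence which group a participant joins.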

Sample Size

When interpreting a study, it is also important to assess the sample size. In general, larger sample sizes produce results with greater precision and statistical power and are therefore more conclusive. If a sample size is very small, results are unlikely to be statistically significant and may be heavily influenced by outliers. Thus, results from studies with small sample sizes should not be extrapolated to a larger population or used to draw broad conclusions.
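The link between sample size and precision can be made concrete with the standard error of the mean, which shrinks in proportion to the square root of the sample size. The numbers below are hypothetical, chosen only to illustrate the relationship:

```python
import math

def standard_error(sample_sd, n):
    """Standard error of the mean: SE = SD / sqrt(n).

    For a fixed amount of variability in the data, quadrupling the
    sample size halves the standard error of the estimate.
    """
    return sample_sd / math.sqrt(n)

# Same variability (SD = 10), different sample sizes:
se_small = standard_error(10, 25)     # n = 25   -> SE = 2.0
se_large = standard_error(10, 2500)   # n = 2500 -> SE = 0.2
```

This is why a finding from 25 participants carries far wider uncertainty than the same finding from 2,500, even when the average values look identical.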

Use of a Control or Baseline Group

Lastly, when assessing a research study, it is important to consider whether the study employed a baseline or control group for comparison. Without this essential component, it is unclear whether the observed results were caused by an intervention or would have occurred anyway. For example, a study that aims to evaluate the effects of a teacher training program on students' test scores could simply compare test scores before and after the program. However, such an evaluation does not account for other variables that may have caused test scores to change without the program.(1) For this reason, a rigorous study must include a control group or baseline. A credible comparison group is essential to determining the full impact of a program or intervention. Results from studies that do not employ a comparison or control group should be treated with skepticism.
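The teacher training example above can be quantified with a simple "difference-in-differences" calculation, one common way a comparison group is used to net out changes that would have happened anyway. The scores below are invented solely for illustration:

```python
def difference_in_differences(treat_before, treat_after,
                              ctrl_before, ctrl_after):
    """Estimate program impact by subtracting the change seen in a
    comparison group from the change seen in the program group."""
    return (treat_after - treat_before) - (ctrl_after - ctrl_before)

# Hypothetical average test scores: schools with trained teachers
# rose from 60 to 70, but comparison schools also rose from 58 to 64
# without any training.
impact = difference_in_differences(60, 70, 58, 64)
# Naive before/after comparison would credit the program with 10
# points; netting out the comparison group's gain leaves only 4.
```

Without the comparison group, the 6-point improvement that occurred everywhere would be wrongly attributed to the training program.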

Footnotes

(1) Clemens, M., and Demombynes, G. “When Does Rigorous Impact Evaluation Make a Difference? The Case of the Millennium Villages.” Center for Global Development. Accessed on 18 November 2010.