The Expert:

Luis Villa

Dr Luis Villa is the manager of the Ko Awatea Research and Evaluation Office. He has a master’s degree in public health from the University of Otago and is an honorary senior lecturer on the University of Auckland’s Master’s Programme in Global Health. Before joining Ko Awatea as evaluation manager in 2014, Luis gained over 20 years’ experience managing health programmes internationally, including working with the World Health Organisation and Doctors Without Borders.

Their View:

Did the butterfly really cause the hurricane? Here is a simple (and simplistic) way of checking attribution.

Attribution is one of the most controversial issues when deciding on evaluation objectives and evaluation questions. To attribute means ‘to regard as resulting from a specified cause’; in evaluation, attribution means establishing that an outcome or impact was directly caused by the intervention(s).

In healthcare system interventions, we tend to be very ambitious and optimistic in the way we deal with attribution. This can lead to frustration when an evaluation proves inconclusive because outcomes have been wrongly attributed to the intervention.

This excessive optimism can be a consequence of the pressure that health systems put on healthcare teams to constantly increase efficiency. Any intervention, however small, is expected to produce spectacular outcomes to justify itself. This can bias evaluation design, as the evaluation questions are often formulated with the expectation of a positive result.

I propose a simple (and simplistic) trick to help teams think about attribution: would the same causal explanation stand if the results were negative?

For example, the Research and Evaluation Office was asked to assess improvements in the management of diabetes for patients with poor clinical outcomes, where the intervention was telephone support for general practitioners (GPs) provided by hospital specialists. Would it be reasonable to say that a hospital specialist supporting GPs could cause diabetes management to worsen at the patient level?

In another example, we were asked to assess changes in health behaviour among youths. The intervention was a Facebook page where a virtual community of teenagers could exchange information about health-related behaviours. Would it be reasonable to say that a Facebook community could bring about a negative change in the health behaviours of youths?

Although we can see how these negative outcomes might occur if everything in the intervention (and probably much outside it) went wrong, it is very unlikely that the interventions themselves would cause them. And if the causal link is too weak to explain a negative result, it is also too weak to claim credit for a positive one; we therefore cannot conclude that positive outcomes are necessarily caused by the intervention, either.

In reality, an intervention may improve the way a healthcare system functions without directly affecting patient outcomes. This does not mean the intervention has no value. However, it may mean that trying to prove a direct effect on patient outcomes is too large a leap, and that we need to scale back the outcomes we try to demonstrate when evaluating an intervention.