kirstyevidence

Musings on research, international development and other stuff

Evaluation: using the right tool for the job


[Image: screwdriver]

In response to my last post, I got a couple of comments (thanks @cashley122 and @intldogooder!) that were so good, I decided to devote a whole post to responding to them. Both commenters were pointing out the tendency of some evaluators to approach a problem with a specific tool – rather than first figuring out what the right question to ask is, and then designing a tool to fit. They were referring to people who want to evaluate every problem with an RCT – but it is just as much of a problem when evaluators approach every question with a specific qualitative approach – a phenomenon which is discussed in this recently published paper by my former colleagues Fran Deans and Alex Ademokun. The paper is an interesting read – it analyses the proposals of people who applied for grant money to evaluate evidence-informed policy. It reveals that many applicants suggested using either focus groups or key-informant interviews – not because these were considered to be the best way to find out how evidence-informed a policy-making institution was – but simply because these were the ‘tools’ which the applicants knew about.

I have been reflecting on these issues and thinking about how we can improve the usefulness of evaluations. So, today’s top tips are about using the right tool for the job.  I have listed three ideas below – but would be interested in other suggestions…

1. Figure out what question you want to answer

The point of doing research is, generally, to answer a question, and different types of question can be answered with different types of method. So the first thing you need to figure out is what question you want to ask. This sounds obvious but it’s remarkable how many people approach every evaluation with essentially the same method. There are countless stories of highly rigorous experimental evaluations which have revealed an accurate answer to completely the wrong question!

2. Think (really think) about the counterfactual

A crucial part of any evaluation is considering what would have happened if the intervention had not happened. Using an experimental approach is one way to achieve this – but it is often not possible. For example, if the target of your intervention is a national parliament, you are unlikely to be able to get a big enough sample size of parliaments to randomise them to treatment and control groups in order to compare what happens with or without the intervention. But this does not mean that you should ignore the counterfactual – it just means you might need to be more creative. One approach would be to compare the parliament before and after the intervention and combine this with some analysis of the context which will help you assess potential alternative explanations for change. A number of such ‘theory-based’ approaches are outlined in this paper on small n impact evaluations.

To strengthen your before/after analysis further, you could consider adding in one or more additional variables which you would not expect to change due to your intervention but which would change as a result of some other confounders. For example, if you were implementing an intervention to increase internet searching skills, you would not expect skills in formatting Word documents to increase. If both variables increased, it might be a clue that the change was due to a confounding factor (e.g. the parliament had employed a whole lot of new staff who were much more computer literate). This approach (which has the catchy title of ‘Nonequivalent Dependent Variables Design’) can add an additional level of confidence to your results.
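To make this concrete, here is a very rough sketch of what such a check might look like – all of the scores, variable names and the 50% threshold below are invented purely for illustration, not drawn from any real evaluation. The idea is simply to compare the change in the skill you targeted with the change in a skill you did not, and to treat a shift in both as a warning sign of confounding.

```python
# Minimal sketch of a before/after check with a nonequivalent dependent variable.
# All scores and the threshold are hypothetical, purely for illustration.
from statistics import mean

# Hypothetical pre/post scores (0-100) for the same group of parliamentary staff.
search_pre  = [42, 55, 38, 61, 47, 50]   # targeted skill: internet searching
search_post = [68, 79, 60, 82, 70, 74]
word_pre    = [58, 62, 49, 66, 55, 60]   # non-targeted skill: Word formatting
word_post   = [59, 63, 50, 68, 54, 61]

search_change = mean(search_post) - mean(search_pre)
word_change = mean(word_post) - mean(word_pre)

print(f"Change in targeted skill (searching): {search_change:+.1f}")
print(f"Change in non-targeted skill (Word):  {word_change:+.1f}")

# If only the targeted skill moves, the intervention is a more plausible
# explanation; if both move together, suspect a confounder (e.g. an influx
# of new, more computer-literate staff).
if word_change > 0.5 * search_change:   # arbitrary illustrative threshold
    print("Both skills shifted - consider confounding explanations for the change.")
else:
    print("Only the targeted skill shifted - consistent with an intervention effect.")
```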

The point is not that these approaches will be perfect – it is not always easy to demonstrate the impact of a given intervention – but the fact that a ‘perfect’ design is not possible does not mean it isn’t worth trying to come up with a design that is as good as possible.

3. Think about the inputs as well as the outputs

Many evaluations set out to ask ‘Does this intervention work in this setting?’. Of course this is a really important question to ask – but development funders usually also want to know whether it works well enough to justify the amount of money it costs. I am well aware that nothing is more likely to trigger a groan amongst development types than the words ‘Value for Money’ – but the fact is that much development work is funded by my Nanna’s tax dollars* and so we have a duty to make sure we are using it wisely (believe me, you wouldn’t want to get on the wrong side of my Nanna).

So, how do you figure out if something is worth the money? Well, again, it is not an exact science, but it can be really useful to compare your intervention with alternative ways of spending the funds and what outcomes these might achieve. An example of this can be found in section 5.1 of this Annual Review of a DFID project which compares a couple of different ways of supporting operational research capacity in the south. A really important point (also made in this blog) is that you need to consider timescales in value for money assessments – some interventions may take a long time – but if they lead to important, sustained changes, they may offer better value for money than superficial quick wins.
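To give a flavour of the sort of back-of-the-envelope comparison I mean, here is a sketch with entirely invented figures (the two options, costs and benefit durations below are hypothetical, not taken from the DFID review). It compares the same budget spent in two different ways on cost per person-year of capacity, so that how long the benefits last counts for something, rather than judging on headline reach alone.

```python
# Illustrative value-for-money comparison of two hypothetical ways of spending
# the same budget; all figures are invented for the sake of the example.
options = {
    "short training workshops": {"cost": 100_000, "researchers_supported": 200,
                                 "benefit_duration_years": 1},
    "long-term mentoring":      {"cost": 100_000, "researchers_supported": 50,
                                 "benefit_duration_years": 5},
}

for name, o in options.items():
    # A crude 'person-years of capacity' measure that factors in how long the
    # benefit is expected to last, not just how many people were reached.
    person_years = o["researchers_supported"] * o["benefit_duration_years"]
    cost_per_person_year = o["cost"] / person_years
    print(f"{name}: £{cost_per_person_year:,.0f} per person-year of capacity")
```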


*Just to be clear, it is not that my Nanna bankrolls all international development work in the world. That would be weird. But I just wanted to make the point that the money comes from tax payers. Also, she doesn’t pay her taxes in dollars but somehow tax pounds doesn’t sound right so I used my artistic license.


6 thoughts on “Evaluation: using the right tool for the job”

  1. ‘Value for money’ is another way of equating the benefits of the intervention with some form of measure acceptable to a wider group – tax dollars, for example. You don’t specifically mention benefits in your post. To me these are the key reasons for intervening in the first place, and evaluation needs to determine a) whether the benefits were agreed before the intervention, b) whether they were the ‘right’ ones, as perhaps better ones arose, c) whether the outcomes were appropriate to achieve the agreed/‘right’ benefits, and d) whether the evidence available substantiated the outcomes, etc.

  2. So glad you wrote this post. Quoting Phil Davies (3ie), evaluators should be concerned about the ‘appropriateness of evidence’ rather than the hierarchy of evidence. This is the only reason the emphasis on RCTs worries me: I have noticed how it has occasionally diverted the sector’s attention straight to tools rather than first thinking about the research questions and which methods are better suited to answer them.

  3. Great post, totally agree on the need to use tools appropriate to the job – and the need to eliminate bias in the types of tools used. A soon-to-be-published paper by Dr James Copestake (I can send you a copy) considers the prevalent bias towards confirmatory approaches in impact assessment, rather than using more open-ended exploratory tools. When thinking (really thinking) about the counterfactual, one of the trickiest things to unpick is what confounding variables may have impacted your outcomes, and how much success or failure can be attributed to YOUR project and how much to a multitude of external factors. James Copestake’s ART project at the Centre for Development Studies (http://tinyurl.com/mu3zxbk) is attempting to address this attribution problem through a new qualitative impact assessment protocol (QUIP – http://www.bath.ac.uk/cds/documents/art_quip_draft.pdf) which gathers feedback from project beneficiaries using an exploratory rather than confirmatory approach. This means ‘blinding’ the researchers conducting the impact assessment to the theories of change and to the project itself to eliminate pro-project bias in questions and answers. This way we hope to gain an overall picture of what events and changes project beneficiaries believe have significantly impacted their lives over the time span of the project. This can be cross-checked against separate quantitative monitoring of key indicators. Copestake makes the argument that the bias towards confirmatory approaches may be due to a left brain hemisphere predisposition (http://tinyurl.com/mwr5v9a) – which side of the brain do you fall on?!

  4. Thanks for the reflections, Kirsty. Just as you affirm, I believe that tools are simply tools: the challenge is to find the right one (or set of them) according to your evaluation purposes. Why are we doing this evaluation? In my experience, the genuine answer to this question explains most of an evaluation’s results. Most of the time evaluators (I have been one several times now) do not delve deeply into the purposes, and their implications, with those who commission the evaluation. I believe that we would contribute significantly to an evaluation effort if we spent some good time and energy at the beginning of any evaluation project challenging ourselves to jointly discuss its purposes (even better with the participation of relevant stakeholders) and what the best path to achieve them is (including how to spend the available resources, and even whether it’s worth spending them at all).

    In this sense, I very much like what Fiona is sharing in terms of an exploratory approach which gives power to beneficiaries to find out about the real impact and benefits of interventions, so as not to limit ourselves to the expected ones (those valued by us). I look forward to reading what she has shared as well as Copestake’s paper.

