kirstyevidence

Musings on research, international development and other stuff

Beneficiary feedback: necessary but not sufficient?


One of the things I love about working in DFID is that people take the issue of beneficiary* feedback very seriously. Of course we don’t get it right all the time. But I like to think that the kind of externally designed, top-down, patronising solutions that are such a feature of the worst kind of development interventions (one word: BandAid**) are much less likely to be supported by the likes of DFID these days.

In fact, beneficiary feedback is so central to how we do our work that criticising it in any way can be seen as controversial – some may even take it as tantamount to saying you hate poor people! So just to be clear, I think we can all agree that getting feedback from the people you are trying to help is a good thing. But we do need to be careful not to oversell what it can tell us. Here are a couple of notes of caution:

1. Beneficiary feedback may not be sufficient to identify a solution to a problem

[Image: problem cake]

It is of course vital to work with potential beneficiaries when designing an intervention to ensure that it actually meets their needs. However, it is worth remembering that what people tell you they need may not match what they will actually benefit from. Think about your own experience – are you always the best-placed person to identify the solution to your problems? Of course not – because we don't know what we don't know. It is for that reason that you consult others – friends, doctors, tax advisors, etc. – to help you navigate your trickiest problems.

I have come across this problem frequently in my work with policy-making institutions (from the north and the south) that are trying to make better use of research evidence. Staff often come up with 'solutions' which I know from (bitter) experience will never work. For example, I often hear policy-making organisations identify that what they need is a new interactive knowledge-sharing platform – and I have watched on multiple occasions as such a platform has been set up and then completely flopped because nobody used it.

2. Beneficiary feedback on its own won’t tell you if an intervention has worked

Evaluation methodologies – and in particular experimental and quasi-experimental approaches – have been developed specifically because just asking someone if an intervention has worked is a particularly inaccurate way to judge its effectiveness! Human beings are prone to a whole host of biases – check out this Wikipedia entry for more biases than you ever realised existed. Of course, beneficiary feedback can and should form part of an evaluation, but you need to be careful about how it is gathered – asking a few people who happen to be available and willing to speak to you is probably not going to give you a particularly accurate overview of user experience. The issue of relying on poorly sampled beneficiary feedback was at the centre of some robust criticisms of the Independent Commission for Aid Impact's recent review of anti-corruption interventions – see Charles Kenny's excellent blog on the matter here.
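For the data-minded, here is a minimal Python sketch of that 'asking whoever is available' problem. It assumes (with entirely invented numbers) a population in which the easily reachable 'elite' respondents are systematically more satisfied than everyone else – the point is the pattern, not the figures:

```python
import random

random.seed(42)

# Toy population of 1,000 beneficiaries with satisfaction scores (roughly 0-10).
# Assumption for illustration only: the easily reachable 'elite' fifth of the
# population is noticeably more satisfied than everyone else.
elite = [random.gauss(7.5, 1.0) for _ in range(200)]      # easy to reach
non_elite = [random.gauss(5.0, 1.5) for _ in range(800)]  # hard to reach
population = elite + non_elite

def mean(xs):
    return sum(xs) / len(xs)

# 'Asking a few people who happen to be available': sample only the elite.
convenience_sample = random.sample(elite, 50)

# A credible sampling frame: draw at random from the whole population.
probability_sample = random.sample(population, 50)

print(f"True population mean:    {mean(population):.2f}")
print(f"Convenience sample mean: {mean(convenience_sample):.2f}")  # biased upward
print(f"Probability sample mean: {mean(probability_sample):.2f}")  # near the truth
```

The convenience sample reports glowing satisfaction even though most of the (harder-to-reach) population is lukewarm – which is exactly the worry about poorly sampled feedback.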

If you are trying to incorporate beneficiary feedback into a rigorous evaluation, a few questions to ask are:

- Have you used a credible sampling frame to select the people you get feedback from? If not, there is a very high chance that you have a biased sample – like it or not, the type of person who ends up being easily accessible to you as a researcher will tend to be an 'elite' in some way.
- Have you compared responses in your test group with responses from a group representing the counterfactual (a toy illustration follows this list)? If not, you are at high risk of simply capturing social desirability bias (i.e. the desire of those interviewed to please the interviewer).
- If you are gathering feedback through a translator, are you confident that the translator is accurately conveying both what you are asking and the answers you get back? There are plenty of examples of translators who, in a misguided effort to help researchers, have put their own 'spin' on the questions and/or answers.
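To illustrate the counterfactual point above, here is a toy simulation (all parameters made up) in which every respondent inflates their answers to please the interviewer. Comparing against a control group nets the inflation out:

```python
import random

random.seed(7)

# Toy model of self-reported improvement (parameter values invented):
# report = true effect (if treated) + social-desirability inflation + noise.
TRUE_EFFECT = 1.0  # the real benefit of the intervention
POLITENESS = 2.0   # everyone inflates answers to please the interviewer

def reports(n, treated):
    effect = TRUE_EFFECT if treated else 0.0
    return [effect + POLITENESS + random.gauss(0, 1) for _ in range(n)]

treatment = reports(200, treated=True)
control = reports(200, treated=False)  # the counterfactual group

def mean(xs):
    return sum(xs) / len(xs)

print(f"Treatment group mean report: {mean(treatment):.2f}")  # ~3.0: looks great
print(f"Control group mean report:   {mean(control):.2f}")    # ~2.0: also 'improved'!
print(f"Difference in means:         {mean(treatment) - mean(control):.2f}")  # ~1.0
# Without the control group you would credit the intervention with ~3 points
# of improvement; the counterfactual comparison recovers the true effect (~1).
```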

Even once you have used a rigorous methodology to collect your beneficiary feedback, it may not be enough to tell the whole story. Getting feedback from people will only ever tell you about their perception of success. In many cases, you will also need to measure some more objective outcome to find out if an intervention has really worked. For example, it is common for people to conclude that their capacity-building intervention has worked because participants report an increase in confidence or skills. But people's perception of their skills may have little correlation with more objective tests of skill level. Similarly, those implementing behaviour change interventions may want to check if there has been a change in perceptions – but they can only really be deemed successful if an actual change in objectively measured behaviour is observed.
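One last illustrative sketch (again, with invented data): here is what it looks like when self-reported confidence and objectively tested skill simply don't line up – the two are drawn independently, so feeling better says nothing about doing better:

```python
import random

random.seed(1)

# Hypothetical post-training data for 100 participants (numbers invented):
# self-reported confidence (1-10) and an objective skills test score (0-100).
n = 100
confidence = [random.uniform(6, 10) for _ in range(n)]  # nearly everyone feels great
test_scores = [random.gauss(55, 15) for _ in range(n)]  # actual skill barely moved

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"Mean self-reported confidence:  {sum(confidence) / n:.1f} / 10")
print(f"Mean objective test score:      {sum(test_scores) / n:.1f} / 100")
print(f"Correlation (confidence, test): {pearson(confidence, test_scores):+.2f}")
# The correlation hovers near zero: glowing self-reports alone would tell
# you nothing about whether skills actually improved.
```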


I guess the conclusion to all this is that of course it is important to work with the people you are trying to help both to identify solutions and to evaluate their success. But we also need to make sure that we don’t fetishise beneficiary feedback and as a result ignore the other important tools we have for making evidence-informed decisions.


* I am aware that 'beneficiary' is a problematic term for some people. Actually, I don't love it either – it does conjure up a rather paternalistic view of development. However, given that it is so widely used, I am going to stick with it for this blog. Please forgive me.

** I refuse to provide link love to BandAid but instead suggest you check out this fabulous Ebola-awareness song featured on the equally fabulous Africaresponds website.

 

3 thoughts on "Beneficiary feedback: necessary but not sufficient?"

  1. Very late in catching up but here goes. I agree the general approach to BF has been to make it a fetish… or some form of fairy dust sprinkled on our actual work.
    But here's the thing: I struggle with the underlying premise of BF not being enough to judge the success of an intervention, or of people not knowing what they don't know. It's like we (aid agencies) come in backed by our experience, knowledge and understanding (all necessary) and basically allow people to comment on what we already know we are going to do. We have to account for the money, of course, and there are set ways of doing that. It feels a bit like taking your kids out for ice-cream and letting them choose the flavours… heaven forbid they choose cake. It's a hot day, they need to cool off, you've pencilled in family time, and there is a standing list of treats for the week calculated for their nutritional value.
    Which brings us to the 'perception of success' and whether the venture has 'really worked': this depends on who is defining 'working' and what they value (the kids may not entirely term the ice-cream option a success, but what do they know… who doesn't like ice-cream?).

    I have a point, I promise. The issue, I think, is not BF in itself. It is that although we are in sympathy with the principle, we are grappling with how to carry it out. Your blog points out the weaknesses of the how, and cleaves to the familiar methods that we know work. I propose that BF look at more than opinion polling and mechanism checklists, and instead confront the underlying reality: it is not their money, and whoever's money it is defines success, what it looks like, and what evidence is sufficient. This is why, for all the feedback in the world, logical frameworks (and their means of verification) do not change.

    So you are correct that current BF methods alone are not sufficient, and I don't think we are saying we need one (BF) or the other (existing evaluation methods) exclusively. I think BF calls for much more than that, and we (NGOs) are just now starting to flirt with the idea that what people know is key for their context/situation and that their perception of success could matter more than the strategic mandates of donor funds. When that happens, we can use our evaluation methodologies to our hearts' content!

  2. OMG, you and I appear to share a brain. I wrote this for Alliance mag about ‘is beneficiary feedback a substitute for RCTs?’ (no, obviously not).
    http://giving-evidence.com/2015/06/02/feedbackrcts/
