kirstyevidence

Musings on research, international development and other stuff



12 principles for Payment by Results – the simplified version

Meanwhile in Oxford…

Stefan Dercon and Paul Clist recently published this excellent short paper outlining 12 principles to consider before using a Payment by Results (PbR) contract for development programmes. But, as pointed out by @hmryder, it is written in quite technical language. You can’t blame the authors – I mean, they are hardcore economists who probably speak that way when they are watching the football. So I have attempted to translate the paper for fellow simple folk – economists do let me know if I have made any mistakes.

Principle 1: PbR involves paying for something after it has been delivered. Therefore it only works if the implementer has enough money in the first place to pay for the work until they are reimbursed.

Principle 2: If you are going to pay based on results, you need to be able to measure the results. If you choose a proxy indicator (i.e. not the final result you are looking for but something that has to change along the way), you need to make sure that changes in your indicator really suggest that the end result will change too.

Principle 3: Some people will game the system by finding ways to make it seem that they have achieved the results when they actually haven’t. Perhaps more worrying is that if you choose the wrong proxy indicator, it might lead people to concentrate too much on achieving the proxy without trying to achieve the actual end result you are looking for.

Principle 4: Donors shouldn’t use PbR just as a way to reduce their risk, for two reasons. Firstly, donors are actually usually much better able to handle risk than implementing partners. This is because donors tend to be funding lots of projects, so if one or two go wrong, they still know they have others that should work. Implementers, on the other hand, may only have one project so they are likely to be really risk averse. The second reason is that the implementer is already likely to be very susceptible to risk and by transferring the additional risk of potential non-payment, you will probably just make them even more risk averse.

Principle 5: If the thing that you want to achieve is essentially the same as the thing the implementer wants to achieve, PbR may not be that useful. PbR should be used to incentivise implementers to do the thing that you want them to do, and you might be wasting effort if they are already fully incentivised to do that thing anyway.

Principle 6: PbR is useful where it is difficult to measure what the implementers are doing (inputting), and therefore you need to measure what they are achieving. If you can easily measure what they are doing, just do that.

Principle 7: PbR works well when achieving the result you are looking for is actually within the control (more or less) of the implementers. It doesn’t work well when there are loads of factors outside the implementers’ control which will determine whether the result is achieved.

Principle 8: The biggest extra cost of PbR contracts compared to other contracts is the cost of verifying whether results (or a suitable proxy indicator of results) have been achieved.

Principle 9: There is some evidence that trying to incentivise people who are already very motivated to do something by giving them money can actually backfire – they may feel insulted that you think they need to be paid to do something when actually they want to do it because they think it is the right thing. (I wrote about this a bit here).

Principle 10: Donors need to be honest about the practical constraints they are working under and to be aware when these might get in the way of an effective PbR contract.

Principle 11: You can only judge whether your PbR contract has been successful by looking to see whether the end result you were aiming for has actually been achieved. Just showing that a proxy indicator has been achieved is not enough.

Principle 12: Remember that PbR is not the only tool in the box for incentivising performance.

 



Top-down versus bottom-up development: where does evidence fit in?

I recently enjoyed reading this speech by Owen Barder in which he describes his gradual transition from a belief in ‘top-down’, ‘pre-fab solutions’ development to a model based on ‘bottom-up’ struggles to find appropriate solutions to problems.

The importance of the process of finding a solution was really hammered home to me when I did a diploma in management studies a few years back. My favourite module was ‘Organisational Development’ – or OD for short – which turns out to be an entire academic discipline (complete with textbooks, experts and internal factions – who knew?) concerned with how organisations can struggle, innovate and adapt to deal with their own problems and how managers can help to facilitate this process. The philosophy of OD is in sharp contrast to the dictatorial view some have of management where those at the top diagnose problems and forcibly implement solutions. In OD, the aim is to have a ‘healthy organisation’, meaning an organisation with the innate capacity to recognise and respond appropriately to problems.

I loved reading about this stuff. In part it appealed because it chimed with my academic background as an immunologist; immunology – the study of how the body combats dangerous assaults – is all about complex adaptive systems (I wrote about this here in what was probably my most nerdy – and least read ;-) – blog post ever).

But OD also seemed remarkably analogous to discussions about international development. Owen Barder is not alone in pointing out the dichotomy between top-down and bottom-up approaches. It is one of the central themes of development – see table below.

‘Bottom up’ versus ‘top down’:

What Owen Barder describes as struggle and adaptation (also related to his writings on complexity)… versus what he calls transplanting best practice
What Ben Ramalingam calls a ‘complex adaptive systems’ approach… versus what he refers to as a ‘conveyer belt’ approach
Those whom William Easterly refers to as ‘seekers’ in ‘The White Man’s Burden’, and the ‘spontaneous solutions’ he describes in ‘The Tyranny of Experts’… versus those he calls ‘planners’ and their ‘conscious designs’
What Duncan Green calls a ‘complex/systems’ approach… versus what he sees as the reality of how aid agencies currently work
The (fabulous!) work carried out within DFID by Pete Vowles and Tom Wingfield aiming to make DFID more ‘adaptive’… versus how everybody fears (with some justification!) that DFID currently manages programmes
Outcome mapping… versus logframes
The Lego you played with as a kid… versus a Deluxe Lego Star Wars Millennium Falcon play set


But the question is, where does the concept of evidence-informed policy making (eipm) fit into that table? I suspect that much of the backlash against eipm comes because people associate it with the right-hand column. There is a fear that eipm is synonymous with researchers, mainly from the north, coming up with solutions to problems and then expecting decision makers in the south to accept these even when they are inappropriate to local context. Now I am one of the biggest fans of eipm you could probably find (I mean, look at my surname?!). But I completely distance myself from that definition of eipm. It is for that reason that I am slightly wary of some efforts by researchers to achieve ‘impact’ and ‘policy influence’ with their research findings. It seems that the aim of making sure your research is taken up can be rather too close to the top-down, solutions-based approach.

For me, eipm is not about pushing out more and more research-based solutions. It is about supporting the appropriate decision-makers to consider the appropriate evidence as they are struggling to come up with solutions which are appropriate for them.

In other words, I place myself, and my concept of eipm, firmly in the left-hand column. I recognise the need for struggles, learning and adaptation as local people deal with local problems. I would simply argue that one of the sources of information which can be immensely useful in informing this process is research evidence.

Edit: After I published this, a couple of people on twitter pointed out, quite correctly, that the top-down vs bottom-up model is a bit of a false dichotomy. It is a good point. In both management and development projects there is a place for a leader to introduce a new vision/process/way of working and then to work to get colleagues ‘on board’ with it. Rather than strictly saying development needs to be in the left-hand column, it would have been better to suggest that there is a spectrum of approaches and that, after many years of being too far to the right, it is time that we move a bit more towards the left-hand side. Thanks as always for the critical comments!



Unintended consequences: When research impact is bad for development

Development research donors are obsessed with achieving research impact and researchers themselves are feeling increasingly pressurised to prioritise communication and influence over academic quality.

To understand how we have arrived at this situation, let’s consider a little story…

Let’s imagine, around 20 years ago, an advisor in an (entirely hypothetical) international development agency. He is feeling rather depressed – and the reason for this is that, despite the massive amount of money that his agency is putting into international development efforts, it still feels like a Sisyphean task. He is well aware that poverty and suffering are rife in the world and he wonders what on earth to do. Luckily this advisor is sensible and realises that what is needed is some research to understand better the contexts in which they are working and to find out what works.

Fast-forward 10 or so years and the advisor is not much happier. The problem is that lots of money has been invested in research but it seems to just remain on the shelf and isn’t making a significant impact on development. And observing this, the advisor decides that we need to get better at promoting and pushing out the research findings. Thus (more or less!) was born a veritable industry of research communication and impact. Knowledge-sharing portals were established, researchers were encouraged to get out there and meet with decision makers to ensure their findings were taken into consideration, a thousand toolkits on research communications were developed and a flurry of research activity researching ‘research communication’ was initiated.

But what might be the unintended consequences of this shift in priorities? I would like to outline three case studies which demonstrate why the push for research impact is not always good for development.

First let’s look at a few research papers seeking to answer an important question in development: does decentralisation improve provision of public services? If you were to look at this paper, or this one or even this one, you might draw the conclusion that decentralisation is a bad thing. And if the authors of those papers had been incentivised to achieve impact, they might have gone out to policy makers and lobbied them not to consider decentralisation. However, a rigorous review of the literature which considered the body of evidence found that, on average, high quality research studies on decentralisation demonstrate that it is good for service provision. A similar situation can be found for interventions such as microfinance or Community Driven Development – lots of relatively poor quality studies saying they are good, but high quality evidence synthesis demonstrating that overall they don’t fulfil their promise.

My second example comes from a programme I was involved in a few years ago which aimed to bring researchers and policy makers together. Such schemes are very popular with donors since they appear to be a tangible way to facilitate research communication to policy makers. An evaluation of this scheme was carried out and one of the ‘impacts’ it reported on was that one policy maker had pledged to increase funding in the research institute of one of the researchers involved in the scheme. Now this may have been a good impact for the researcher in question – but I would need to be convinced that investment in that particular research institution happened to be the best way for that policy maker to contribute to development.

My final example is on a larger scale. Researchers played a big role in advocating for increased access to anti-HIV drugs, particularly in Africa. The outcome of this is that millions more people now have access to those drugs, and on the surface of it that seems to be a wholly wonderful thing. But there is an opportunity cost in investment in any health intervention – and some have argued that more benefit could be achieved for the public if funds in some countries were rebalanced towards other health problems. They argue that people are dying from cheaply preventable diseases because so much funding has been diverted to HIV. It is for this reason we have NICE in the UK to evaluate the cost-effectiveness of new treatments.

What these cases have in common is that in each I feel it would be preferable for decision makers to consider the full body of evidence rather than being influenced by one research paper, researcher or research movement. Of course I recognise that this is a highly complicated situation. I have chosen three cases to make a point but there will be many more cases where researchers have influenced policy on the basis of single research studies and achieved completely positive impacts. I can also understand that a real worry for people who have just spent years trying to encourage researchers to communicate better is that the issues I outline here could cause people to give up on all their efforts and go back to their cloistered academic existence. And in any case, even if pushing for impact were always a bad thing, publicly funded donors would still need to have some way to demonstrate to taxpayers that their investments in research were having positive effects.

So in the end, my advice is something of a compromise. Most importantly, I think researchers should make sure they are answering important questions, using the methods most suitable to the question. I would also encourage them to communicate their findings in the context of the body of research. Meanwhile, I would urge donors to continue to support research synthesis – to complement their investments in primary research. And to support policy making processes which include consideration of bodies of research.



Can an outsider help solve your problems?

My sister, who knows about these things, tells me that most great innovations happen when someone from one sector/area of expertise moves to a new sector/area of expertise and introduces a new way of dealing with a problem.

Face-palm moment

This kind of surprises me – my experience is that when new people arrive in my sector, they quite often make lots of the same mistakes that those of us who have been around for a while tried and discarded long ago. But my sister’s revelation made me wonder whether this slightly negative attitude towards newbies is doing me harm. Is my snootiness depriving me of lots of valuable opportunities to learn?

The answer is probably yes, but I think ‘outsider’ input into problem solving does need to be well managed. It is possible that someone with a new perspective will identify a fabulous and innovative new way to solve a problem – but there is also a high risk that they will jump to the same naive assumptions that you used to have before you became so jaded I mean… experienced.

So here are my top tips for both sides of the equation – and, as usual, my advice is gathered from my own experience of messing this type of thing up!

If you are the highly experienced expert who is getting some ‘outsider’ perspective….

1. Stop being so bloomin’ grumpy! Yes of course you know lots about this and of course the outsider will appear ignorant – but if you can attempt to engage with them enthusiastically – even gratefully – and provide evidence for why certain ideas might not work (rather than rolling your eyes!) you might well get a useful new perspective.

2. Build your credibility as an expert by summarising important bodies of knowledge that you have learnt from – including your own experiences, books, experts, research evidence etc. This will be more helpful and more persuasive than just expecting people to realise that you know best (even if you do!).

3. Don’t be afraid to pinpoint parts of the problem which you already feel well-placed to solve – and other parts where you would welcome some input.

If you are the bright-eyed, bushy-tailed outsider who has been brought in to advise…

1. Make sure it is clear that you want to listen – this usually reduces people’s resistance. And try to spend as much time as possible understanding the problem that people are trying to solve before jumping in with solutions. I find the ‘Action Learning’ approach really useful for stopping yourself trying to solve a problem before you actually understand it.

2. Be respectful of people’s knowledge and experience and take the time to listen to how they think the problem should be solved (even if they do seem grumpy!). You may eventually decide to provide constructive challenge to their proposed solutions, but this will never be effective unless you really understand why they are proposing them.

3. Repeatedly invite the experts to challenge any new ideas you have – and develop a thick skin!


And, just in case none of this works, you may also want to check out this post on dealing with disagreements…!



Make love not war: bringing research rigour and context together

I’ve just spent a few days in Indonesia having meetings with some fascinating people discussing the role of think tanks in supporting evidence-informed policy. It was quite a privilege to spend time with people who had such deep and nuanced understanding of the ‘knowledge sectors’ in different parts of the world (and if you are interested in learning more, I would strongly recommend you check out some of their blogs here, here and here).

However, one point of particular interest within the formal meetings was that research quality/rigour often seemed to be framed in opposition to considerations of relevance and context. I was therefore interested to see that Lant Pritchett has also just written a blog with essentially the same theme – making the point that research rigour is less important than contextual relevance.

I found this surprising – not because I think context is unimportant – but because I do not see why the argument needs to be dichotomous. Research quality and research relevance are two important issues and the fact that some research is not contextually relevant does not in any way negate the fact that some research is not good quality.

How not to move a discussion forward

To illustrate this, let’s consider a matrix comparing quality with relevance.

Low quality, low contextual understanding: the stuff which I think we can all agree is pointless.
High quality, low contextual understanding: rigorous research which is actually looking at irrelevant/inappropriate questions due to poor understanding of context.
Low quality, high contextual understanding: research which is based on deep understanding of context but which is prone to bias due to poor methodology.
High quality, high contextual understanding: the good stuff! Research which is informed by good contextual understanding and which uses high quality methods to investigate relevant questions.

Let me give some examples from each of these categories:

Low quality low contextual understanding

I am loath to give any examples for this box since it will just offend people – but I would include in this category any research which involves a researcher with little or no understanding of the context ‘parachuting in’ and then passing off their opinions as credible research.

High quality, low contextual understanding

An example of this is here – a research study on microbicides to prevent the transmission of HIV which was carried out in Zambia. This research used an experimental methodology – the most rigorous approach one can use when seeking to prove causal linkages. However, the qualitative research strand which was run alongside the trial demonstrated that, due to poor understanding of sexual behaviours in the context they were working in, the experimental data were flawed.

Low quality, high contextual understanding

An example of this is research to understand the links between investment in research and the quality of university education which relies on interviews and case studies with academics. These academics have a very high understanding of the context of the university sector and you can therefore see why people would choose to ask them these questions. However, repeated studies show that academics almost universally believe that investment in research is crucial to drive up the quality of education within universities, while repeated rigorous empirical studies reveal that the relationship between research and education quality is actually zero.

High quality, high contextual understanding

An example here could be this set of four studies of African policy debates. The author spent extended periods of time in each location and made every effort to understand the context – but she also used high quality qualitative research methods to gather her data. Another example could be the CDD paper I have blogged about before where an in-depth qualitative approach to understand context was combined with a synthesis of high-quality experimental research evidence. Or the research described in this case study – an evaluation carried out in Bolivia which demonstrates how deep contextual understanding and research rigour can be combined to achieve impact.

Some organisations will be really strong on relevance but be producing material which is weak methodologically and therefore prone to bias. This is dangerous since – as described above – poor quality research may well give answers – but they may be entirely the wrong answers to the questions posed. Other organisations will be producing stuff which is highly rigorous but completely irrelevant. Again, this is at best pointless and at worst dangerous if decision makers do not recognise that it is irrelevant to the questions they are grappling with.

In fact, the funny thing is that when deciding whether to concentrate more on improving research relevance or research quality… context matters! The problem of poor quality and the problem of low contextual relevance both occur and both reduce the usefulness of the research produced – and arguing about which one is on average more damaging is not going to help improve that situation.

One final point that struck me from reading the Pritchett blog is that he appears to have a fear that a piece of evidence which shows that something works in one context will be mindlessly used to make the argument that the same intervention should be used in another. In other words, there is a concern that rigorous evidence will be used to back up normative policy advice. If evidence were to be used in that way, I would also be afraid of it – but that is fundamentally not what I consider to be evidence-informed policy making. In fact, I disagree that any research evidence ever tells anyone what they should do.

Thus, I agree with Pritchett that evidence of the positive impact of low class sizes in Israel does not provide the argument that class sizes should be lowered in Kenya. But I would also suggest that such evidence does not necessarily mean that policy makers in Israel should lower class sizes. This evidence provides some information which policy makers in either context may wish to consider – hence evidence-informed policy making.

The Israeli politicians may come to the conclusion that the evidence of the benefit of low class sizes is relatively strong in their context. However, they may well make a decision not to lower class sizes due to other factors – for example, finances. I would still consider this decision to be evidence-informed. Conversely, the policy makers in Kenya may look at the Israeli evidence and conclude that it refers to a different context and that it may therefore not provide a useful prediction of what will happen in Kenya. However, they may decide that it is sufficient to demonstrate that lower class sizes can improve outcomes in some contexts, and that this is reason enough to take a decision to try the policy out.

In other words, political decisions are always based on multiple factors – evidence will only ever be one of them. And evidence from alternative contexts can still provide useful information – providing you don’t overinterpret that information and assume that something that works in one context will automatically transfer to another.



Results-based aid

I was recently asked for my opinion on the links between two common concepts in development: results-based aid (RBA) and evidence-informed policy making. It isn’t something I had really considered before, but the more I thought about it, the more I came to the conclusion that these concepts are very different – and the fact that they are often considered to be related is a bit of a worry.

RBA (a general term I will use here to cover various different mechanisms for paying for development interventions on the basis of outputs/outcomes, not inputs) is a mechanism which relies on the ability to measure and attribute impact in order to trigger payments. In other words, you make a decision (on whether to pay) based only on robust evidence of impact. As I have argued quite a few times before (e.g. here and here), evidence-informed policy is all about using a wide variety of evidence to inform your decisions – while acknowledging that the evidence will always be incomplete and that many other factors will also influence you. In this sense evidence-informed policy is quite different from RBA because, although it concerns making decisions based on evidence, it implies a much broader scope of evidence.

I am not saying this in order to criticise RBA. I think it can be a really useful tool and I am delighted to see some really innovative thinking about how RBA can be used to drive better development outcomes. There is some great writing from Nancy Birdsall and colleagues here on the topic which I highly recommend taking a look at.

But my concern about RBA is that it is sometimes applied to projects where it is not appropriate or, worse, that in the future projects will only be funded if they are ‘RBA-able’. I would suggest that to determine whether RBA is appropriate for a given intervention, you need to ask yourself the following questions:

1. Do you know what the problem is?

2. Do you know what the solution to the problem is?

3. Are you confident that the supplier/implementer will be free to implement the solution (i.e. that achievement or non-achievement of the outcome is broadly within their control)?

4. Is the supplier/implementer extrinsically motivated (i.e. incentivised by money)?

Where the answer is yes to these questions, RBA may be a good contracting approach since it will help incentivise the supplier to put their effort into achieving the outcomes you are interested in. Examples might include contracting a commercial company to build a bridge (where there is a clear demand for these interventions from local decision makers) or providing funds to a developing country government for achieving certain measurable health outcomes.

However, I am sure it has occurred to you that many development projects do not fit this mould.

Let me give an example. Some years ago I was involved in a project to support the use of research evidence in the parliament of a country which I will call Zalawia. We recognised that what the staff of the Parliament of Zalawia did not need was more parachuted-in northern experts to give one-off  training workshops on irrelevant topics – they needed support in some basic skills (particularly around using computers to find information), ideally delivered by someone who understood the context and could provide long-term support. So, we supported a link between the parliament and one of the national universities. We identified one university lecturer, let’s call him Dr Phunza, who had a real interest in use of evidence and we supported him to develop and set up a capacity building scheme for the parliament. Our support included providing Dr Phunza with intense training and mentoring in effective pedagogy, providing funds for his scheme and helping him to secure buy-in from the head of research and information in the Parliament. A number of meetings and phone calls took place between Dr Phunza and staff in the parliament over many months and eventually a date was set for the first of a series of training sessions in ‘Finding and Using Online Information’. Dr Phunza developed a curriculum for the course and engaged the services of a co-facilitator. However, when the day arrived, none of the parliamentary staff who were expected to turn up did so – at the last minute they had been offered a higher per diem to attend an alternative meeting so they went there.

So, what would have happened if we had been on a results-based contract with our funders? Well essentially, we would have put all our effort in, taken up a lot of our time and energy, and spent our funds on transport, room hire etc., and yet we would presumably not have been paid since we didn’t achieve the outcome we had planned. I have worked in many policy making institutions on projects to support the use of evidence and I can say that the situation described in Zalawia was in no way unusual. In fact, if we had been pushed to use an RBA model for that project, given our knowledge of the inherent difficulty of working with parliaments, our incentive from the outset would have been to set up a project with a much more achievable outcome – even if we knew it would have much less impact.

So let’s go back to my four questions and apply them to this project…

1. Did we know what the problem was? – well yes, I would say we were pretty clear on that.

2. Did we know what the solution to the problem was? – hmm, not really. We had some ideas that were informed by past experiences – but I think that we still had quite a bit to learn. The issue was that there was no real evidence base on ‘what works’ in the setting we were working in so the only way to find out was trial and (quite a lot of) error.

3. Were we free to implement the solution? – absolutely not! We were completely dependent on the whims of the staff and in particular the senior management of the parliament in question.

4. Were we incentivised by money? – no, not really. I was working for a non-profit organisation and Dr Phunza was a university lecturer. If money had been withheld it would just have meant that some of the other activities we were planning would not have been possible. I suspect that I still would have found funds, even if it was from my own pocket, to pay Dr Phunza.

The other thing that is worth saying is that, given how hard both Dr P and I worked to get the project running, I think we would have found it quite insulting and demotivating to be told that we would only be paid if we were successful – it would have seemed rather rude to imply that we needed financial incentives in order to bother trying!

In other words, I don’t think this type of project would be suitable for RBA. There are many risks inherent in funding such a project, but the implementer not bothering to try is not a major one – and thus the risk mitigation strategy of RBA would be unnecessary, and potentially damaging.

Does this mean that I think our donors should have just left us alone to get on with our project? Absolutely not! I am well aware that many development actors spend many years working hard at interventions they truly believe in which are, in fact, pointless or even damaging. So I am not suggesting that donors should just let people do what they like so long as they are well-intentioned. However, I think we need to choose mechanisms for scrutiny and incentivising that fit the particular aims and context of the development programme in question. And where we don’t have good mechanisms to hand, we need to continue to innovate to develop systems that help us achieve the impacts we seek.

UPDATE: After writing this blog I have had quite a few interesting discussions on this topic which have moved my thinking on a bit. In particular, Owen Barder gave some useful comments via twitter. What I took from that discussion (if I understood correctly) was that in the case I gave, RBA could still have been used, but some sort of ‘risk premium’ would have had to be built into the payment schedule – i.e. the donor would have had to add some extra funds to each payment above and beyond the actual costs. He also took issue with my saying that implementers were not incentivised by money – if that were really the case, he asked, would implementers spend so much time trying to please donors? A fair point! So perhaps combining a risk premium with PbR would ensure that the implementer was still incentivised to deliver (by paying on results) but would also mean that they were able to mitigate the risk that one or more milestones were not met.

This still leaves me with some unanswered questions – one issue is how you work out the extent of the risk in new and novel programmes. Another point, made by an organisation on a milestone-based contract, is that they find it reduces their opportunity for innovation – they are tied down to delivering certain milestones, and if they realise that better development impact could be achieved by doing something they had not predicted at the time the contract was negotiated, this is difficult. So in summary, this is a complicated topic with far more pros and cons than I probably realised when I started writing! But I am grateful to people for continuing to educate me!
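A quick illustrative aside on that first unanswered question: one very simplified way to think about sizing a risk premium is to ask what payment would let the implementer break even in expectation, given some estimated probability of hitting the milestone. The little Python sketch below is purely my own back-of-the-envelope illustration – the break-even rule, the numbers and the probability are all assumptions I have made up for the example, not anything proposed by Owen or by any donor.

# Illustrative sketch only: sizing a risk premium for one PbR milestone,
# assuming the donor wants the implementer to break even in expectation.

def break_even_payment(cost, p_success):
    """Payment on success such that the expected payment equals the cost."""
    if not 0 < p_success <= 1:
        raise ValueError("p_success must be between 0 and 1")
    return cost / p_success

cost = 100_000      # implementer's actual cost of delivering the milestone (made up)
p_success = 0.6     # guessed probability the milestone is achieved (also made up)

payment = break_even_payment(cost, p_success)
print(f"Payment on success: {payment:,.0f}")           # ~166,667
print(f"Implied risk premium: {payment - cost:,.0f}")  # ~66,667

Which of course just pushes the problem back a step: for genuinely novel programmes nobody really knows what that probability is – which is exactly why this remains an unanswered question for me.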



Should we be worried about policy makers’ use of evidence?

A couple of papers have come out this week on policy makers’ use of evidence.


Policy makers are apparently floating around in their own little bubbles – but should this be a cause for concern?

The first is a really interesting blog by Mark Chataway, a consultant who has spent recent months interviewing policy makers (thanks to @PrachiSrivas for sharing this with me). His conclusion, after speaking to a large number of global health and development policy makers, is that most of them live in a very small bubble. They do not read widely and instead rely on information shared with them via twitter, blogs or email summaries.

The blog is a good read – and I look forward to reading the full report when it comes out – but I don’t find it particularly shocking and actually, I don’t find it particularly worrying.

No policymaker is going to be able to keep abreast of all the new research findings in his/her field of interest. Even those people who do read some of the excellent specialist sources mentioned in the article will only ever get a small sample of the new information that is being generated. In fact, trying to prospectively stay informed about all research findings of potential future relevance is an incredibly inefficient way to achieve evidence-informed decision-making. For me, a far more important question is whether decision makers  access, understand and apply relevant research knowledge at the point at which an actual decision is being made.

Enter DFID’s first ever Evidence Survey – the results of which were published externally this week.

This survey (which I hear was carried out by a particularly attractive team of DFID staff) looked at a sample of staff across grades (from grade ‘B1d’ to ‘SCS’, in case that means anything to you…) and across specialities.

So, should we be confident about DFID staff’s use of evidence?

Well, partly…

The good news is that DFID staff seem to value evidence really highly. In fact, as the author of the report gloats, there is even evidence that DFID values evidence more than the World Bank (although if you look closely you will see this is a bit unfair to our World Bank colleagues since the questions asked were slightly different).

And there was recognition that the process for getting new programmes approved does require staff to find and use evidence. The DFID business case requires staff to analyse the evidence base which underlies the ‘strategic need’ and the evidence which backs up different options for intervening. Guidance on how to assess evidence is provided. The business case is scrutinised by a chain of managers and eventually a government minister. Controversial or expensive (over £40m) business cases have an additional round of scrutiny from the internal Quality Assurance Unit.

Which is all great…

But one problem which is revealed by the Evidence Survey, and by recent internal reviews of DFID process, is that there is a tendency to forget about evidence once a programme is initiated. Anyone who has worked in development knows that we work in complex and changing environments and that there is usually not clear evidence of ‘what works’. For this reason it is vital that development organisations are able to continue to gather and reflect on emerging evidence and adapt to optimise along the way.

A number of people on Twitter have also picked up on the fact that a large proportion of DFID staff failed some of the technical questions – on research methodologies, statistics etc. Actually, this doesn’t worry me too much since most of the staff covered by the survey will never have any need to commission research or carry out primary analysis. What I think is more important is whether staff have access to the right levels of expertise at the times when they need it. There were some hints that staff would welcome more support and training so that they were better equipped to deal with evidence.

A final area for potential improvement would be on management prioritisation of evidence. Encouragingly, most staff felt that evidence had become more of a priority over recent years – but they also tended to think that they valued evidence more than their managers did – suggesting a continued need for managers to prioritise this.

So, DFID is doing well in some areas, but clearly has some areas it could improve on. The key for me will be to ensure there are processes, incentives and capacity to incorporate evidence at all key decision points in a programme cycle. From the results of the survey, it seems that a lot of progress has been made and I for one am excited to try to get even better.
