kirstyevidence

Musings on research, international development and other stuff



Unintended consequences: When research impact is bad for development

Development research donors are obsessed with achieving research impact, and researchers themselves are feeling increasingly pressurised to prioritise communication and influence over academic quality.

To understand how we have arrived at this situation, let’s consider a little story…

Let’s imagine, around 20 years ago, an advisor in an (entirely hypothetical) international development agency. He is feeling rather depressed – because despite the massive amount of money his agency is putting into international development efforts, it still feels like a Sisyphean task. He is well aware that poverty and suffering are rife in the world and he wonders what on earth to do. Luckily, this advisor is sensible and realises that what is needed is some research to better understand the contexts in which they are working and to find out what works.

Fast-forward 10 or so years and the advisor is not much happier. The problem is that lots of money has been invested in research, but the findings seem to just remain on the shelf and aren’t making a significant impact on development. And observing this, the advisor decides that we need to get better at promoting and pushing out the research findings. Thus (more or less!) was born a veritable industry of research communication and impact. Knowledge-sharing portals were established, researchers were encouraged to get out there and meet with decision makers to ensure their findings were taken into consideration, a thousand toolkits on research communications were developed and a flurry of research activity researching ‘research communication’ was initiated.

But what might be the unintended consequences of this shift in priorities? I would like to outline three case studies which demonstrate why the push for research impact is not always good for development.

First let’s look at a few research papers seeking to answer an important question in development: does decentralisation improve provision of public services? If you were to look at this paper, or this one or even this one, you might draw the conclusion that decentralisation is a bad thing. And if the authors of those papers had been incentivised to achieve impact, they might have gone out to policy makers and lobbied them not to consider decentralisation. However, a rigorous review of the literature which considered the body of evidence found that, on average, high quality research studies on decentralisation demonstrate that it is good for service provision. A similar situation can be found for interventions such as microfinance or Community Driven Development – lots of relatively poor quality studies saying they are good, but high quality evidence synthesis demonstrating that overall they don’t fulfil their promise.

My second example comes from a programme I was involved in a few years ago which aimed to bring researchers and policy makers together. Such schemes are very popular with donors since they appear to be a tangible way to facilitate research communication to policy makers. An evaluation of this scheme was carried out, and one of the ‘impacts’ it reported was that one policy maker had pledged to increase funding to the research institute of one of the researchers involved in the scheme. Now this may have been a good impact for the researcher in question – but I would take some convincing that investing in that particular research institution was the best way for that policy maker to contribute to development.

My final example is on a larger scale. Researchers played a big role in advocating for increased access to anti-HIV drugs, particularly in Africa. The outcome is that millions more people now have access to those drugs – and on the surface that seems a wholly wonderful thing. But there is an opportunity cost to investment in any health intervention – and some have argued that more benefit could be achieved for the public if funds in some countries were rebalanced towards other health problems. They argue that people are dying from cheaply preventable diseases because so much funding has been diverted to HIV. It is for this reason that we have NICE (the National Institute for Health and Care Excellence) in the UK to evaluate the cost-effectiveness of new treatments.

What these cases have in common is that in each, I feel it would be preferable for decision makers to consider the full body of evidence rather than being influenced by one research paper, researcher or research movement. Of course I recognise that this is a highly complicated situation. I have chosen three cases to make a point, but there will be many more cases where researchers have influenced policy on the basis of single research studies and achieved completely positive impacts. I can also understand that a real worry for people who have just spent years trying to encourage researchers to communicate better is that the issues I outline here could cause people to give up on all their efforts and go back to their cloistered academic existence. And in any case, even if pushing for impact were always a bad thing, publicly funded donors would still need some way to demonstrate to taxpayers that their investments in research were having positive effects.

So in the end, my advice is something of a compromise. Most importantly, I think researchers should make sure they are answering important questions, using the methods most suitable to the question. I would also encourage them to communicate their findings in the context of the body of research. Meanwhile, I would urge donors to continue to support research synthesis – to complement their investments in primary research – and to support policy-making processes which include consideration of bodies of research.



Can an outsider help solve your problems?

My sister, who knows about these things, tells me that most great innovations happen when someone from one sector/area of expertise moves to a new sector/area of expertise and introduces a new way of dealing with a problem.

Face-palm moment

This kind of surprises me – my experience is that when new people arrive in my sector, they quite often make many of the same mistakes that those of us who have been around for a while long ago tried and discarded. But my sister’s revelation made me wonder whether this slightly negative attitude towards newbies is doing me harm. Is my snootiness depriving me of valuable opportunities to learn?

The answer is probably yes, but I think ‘outsider’ input into problem solving does need to be well managed. It is possible that someone with a new perspective will identify a fabulous and innovative way to solve a problem – but there is also a high risk that they will jump to the same naive assumptions that you used to hold before you became so jaded… I mean, experienced.

So here are my top tips for both sides of the equation – and, as usual, my advice is gathered from my own experience of messing this type of thing up!

If you are the highly experienced expert who is getting some ‘outsider’ perspective…

1. Stop being so bloomin’ grumpy! Yes, of course you know lots about this, and of course the outsider will appear ignorant – but if you can engage with them enthusiastically – even gratefully – and provide evidence for why certain ideas might not work (rather than rolling your eyes!), you might well get a useful new perspective.

2. Build your credibility as an expert by summarising the important bodies of knowledge you have learnt from – including your own experiences, books, experts, research evidence etc. This will be more helpful and more persuasive than just expecting people to realise that you know best (even if you do!).

3. Don’t be afraid to pinpoint parts of the problem which you already feel well-placed to solve – and other parts where you would welcome some input.

If you are the bright-eyed, bushy-tailed outsider who has been brought in to advise…

1. Make sure it is clear that you want to listen – this usually reduces people’s resistance. And try to spend as much time as possible understanding the problem people are trying to solve before jumping in with solutions. I find the ‘Action Learning’ approach really useful for forcing you to stop trying to solve a problem before you actually understand it.

2. Be respectful of people’s knowledge and experience and take the time to listen to how they think the problem should be solved (even if they do seem grumpy!). You may eventually decide to provide constructive challenge to their proposed solutions, but this will never be effective unless you really understand why they are proposing them.

3. Repeatedly invite the experts to challenge any new ideas you have – and develop a thick skin!


And, just in case none of this works, you may also want to check out this post on dealing with disagreements…!



Make love not war: bringing research rigour and context together

I’ve just spent a few days in Indonesia having meetings with some fascinating people discussing the role of think tanks in supporting evidence-informed policy. It was quite a privilege to spend time with people who had such deep and nuanced understanding of the ‘knowledge sectors’ in different parts of the world (and if you are interested in learning more, I would strongly recommend you check out some of their blogs here, here and here).

However, one point of particular interest within the formal meetings was that research quality/rigour often seemed to be framed in opposition to considerations of relevance and context. I was therefore interested to see that Lant Pritchett has also just written a blog with essentially the same theme – making the point that research rigour is less important than contextual relevance.

I found this surprising – not because I think context is unimportant – but because I do not see why the argument needs to be dichotomous. Research quality and research relevance are two important issues and the fact that some research is not contextually relevant does not in any way negate the fact that some research is not good quality.

How not to move a discussion forward

To illustrate this, let’s consider a matrix comparing quality with relevance.

– Low quality, low contextual understanding: the stuff which I think we can all agree is pointless.

– High quality, low contextual understanding: rigorous research which is actually looking at irrelevant/inappropriate questions due to poor understanding of context.

– Low quality, high contextual understanding: research which is based on deep understanding of context but which is prone to bias due to poor methodology.

– High quality, high contextual understanding: the good stuff! Research which is informed by good contextual understanding and which uses high quality methods to investigate relevant questions.

Let me give some examples from each of these categories:

Low quality low contextual understanding

I am loath to give any examples for this box since it will just offend people – but I would include in this category any research which involves a researcher with little or no understanding of the context ‘parachuting in’ and then passing off their opinions as credible research.

High quality, low contextual understanding

An example of this is here – a research study on microbicides to prevent the transmission of HIV which was carried out in Zambia. This research used an experimental methodology – the most rigorous approach one can use when seeking to prove causal linkages. However, the qualitative research strand which was run alongside the trial demonstrated that, due to poor understanding of sexual behaviours in the context they were working in, the experimental data were flawed.

Low quality, high contextual understanding

An example of this is research to understand the links between investment in research and the quality of university education which relies on interviews and case studies with academics. These academics have a very high understanding of the context of the university sector and you can therefore see why people would choose to ask them these questions. However, repeated studies show that academics almost universally believe that investment in research is crucial to drive up the quality of education within universities, while repeated rigorous empirical studies reveal that the relationship between research and education quality is actually zero.

High quality, high contextual understanding

An example here could be this set of four studies of African policy debates. The author spent extended periods of time in each location and made every effort to understand the context – but she also used high quality qualitative research methods to gather her data. Another example could be the CDD paper I have blogged about before where an in-depth qualitative approach to understand context was combined with a synthesis of high-quality experimental research evidence. Or the research described in this case study – an evaluation carried out in Bolivia which demonstrates how deep contextual understanding and research rigour can be combined to achieve impact.

Some organisations will be really strong on relevance but produce material which is weak methodologically and therefore prone to bias. This is dangerous since, as described above, poor quality research may well give answers – but they may be entirely the wrong answers to the questions posed. Other organisations will be producing stuff which is highly rigorous but completely irrelevant. Again, this is at best pointless and at worst dangerous if decision makers do not recognise that it is irrelevant to the questions they are grappling with.

In fact, the funny thing is that when deciding whether to concentrate more on improving research relevance or research quality… context matters! The problem of poor quality and the problem of low contextual relevance both occur and both reduce the usefulness of the research produced – and arguing about which one is on average more damaging is not going to help improve that situation.

One final point that struck me from reading the Pritchett blog is that he appears to fear that a piece of evidence showing that something works in one context will be mindlessly used to argue that the same intervention should be used in another. In other words, there is a concern that rigorous evidence will be used to back up normative policy advice. If evidence were to be used in that way, I would also be afraid of it – but that is fundamentally not what I consider to be evidence-informed policy making. In fact, I disagree that any research evidence ever tells anyone what they should do.

Thus, I agree with Pritchett that evidence of the positive impact of low class sizes in Israel does not provide the argument that class sizes should be lowered in Kenya. But I would also suggest that such evidence does not necessarily mean that policy makers in Israel should lower class sizes. This evidence provides some information which policy makers in either context may wish to consider – hence evidence-informed policy making.

The Israeli politicians may come to the conclusion that the evidence of the benefit of low class sizes is relatively strong in their context. However, they may well decide not to lower class sizes due to other factors – finances, for example. I would still consider this decision to be evidence-informed. Conversely, the policy makers in Kenya may look at the Israeli evidence and conclude that, since it refers to a different context, it may not provide a useful prediction of what will happen in Kenya. However, they may decide that it is sufficient to demonstrate that in some contexts lower class sizes can improve outcomes – and that this is enough evidence for them to take a decision to try the policy out.

In other words, political decisions are always based on multiple factors – evidence will only ever be one of them. And evidence from alternative contexts can still provide useful information – providing you don’t overinterpret that information and assume that something that works in one context will automatically transfer to another.



Results-based aid

I was recently asked for my opinion on the links between two common concepts in development: results-based aid (RBA) and evidence-informed policy making. It isn’t something I had really considered before, but the more I thought about it, the more I came to the conclusion that these concepts are very different – and the fact that they are often considered to be related is a bit of a worry.

RBA (a general term I will use here to cover various mechanisms for paying for development interventions on the basis of outputs/outcomes, not inputs) relies on the ability to measure and attribute impact in order to trigger payments. In other words, you make a decision (on whether to pay) based only on robust evidence of impact. As I have argued quite a few times before (e.g. here and here), evidence-informed policy is all about using a wide variety of evidence to inform your decisions – while acknowledging that the evidence will always be incomplete and that many other factors will also influence you. In this sense evidence-informed policy is quite different to RBA: although both concern making decisions based on evidence, evidence-informed policy implies a much broader scope of evidence.

I am not saying this in order to criticise RBA. I think it can be a really useful tool and I am delighted to see some really innovative thinking about how RBA can be used to drive better development outcomes. There is some great writing from Nancy Birdsall and colleagues here on the topic which I highly recommend taking a look at.

But my concern about RBA is that it is sometimes applied to projects where it is not appropriate or, worse, that in the future projects will only be funded if they are ‘RBA-able’. I would suggest that to determine whether RBA is appropriate for a given intervention, you need to ask yourself the following questions:

1. Do you know what the problem is?

2. Do you know what the solution to the problem is?

3. Are you confident that the supplier/implementer will be free to implement the solution (i.e. that achievement or non-achievement of the outcome is broadly within their control)?

4. Is the supplier/implementer extrinsically motivated (i.e. incentivised by money)?

Where the answer is yes to these questions, RBA may be a good contracting approach since it will help incentivise the supplier to put their effort into achieving the outcomes you are interested in. Examples might include contracting a commercial company to build a bridge (where there is a clear demand for these interventions from local decision makers) or providing funds to a developing country government for achieving certain measurable health outcomes.

However, I am sure it has occurred to you that many development projects do not fit this mould.

Let me give an example. Some years ago I was involved in a project to support the use of research evidence in the parliament of a country which I will call Zalawia. We recognised that what the staff of the Parliament of Zalawia did not need was more parachuted-in northern experts giving one-off training workshops on irrelevant topics – they needed support in some basic skills (particularly around using computers to find information), ideally delivered by someone who understood the context and could provide long-term support. So we supported a link between the parliament and one of the national universities. We identified one university lecturer, let’s call him Dr Phunza, who had a real interest in the use of evidence, and we supported him to develop and set up a capacity building scheme for the parliament. Our support included providing Dr Phunza with intensive training and mentoring in effective pedagogy, providing funds for his scheme and helping him to secure buy-in from the head of research and information in the parliament. A number of meetings and phone calls took place between Dr Phunza and parliamentary staff over many months and eventually a date was set for the first of a series of training sessions in ‘Finding and Using Online Information’. Dr Phunza developed a curriculum for the course and engaged the services of a co-facilitator. However, when the day arrived, none of the parliamentary staff who were expected to turn up did so – at the last minute they had been offered a higher per diem to attend an alternative meeting, so they went there.

So, what would have happened if we had been on a results-based contract with our funders? Well, essentially, we would have put all our efforts in, taken up a lot of our time and energy, and spent our funds on transport, room hire etc., and yet we would presumably not have been paid since we didn’t achieve the outcome we had planned. I have worked in many policy-making institutions on projects to support the use of evidence and I can say that the situation described in Zalawia was in no way unusual. In fact, if we had been pushed to use an RBA model for that project, given our knowledge of the inherent difficulty of working with parliaments, our incentive from the outset would have been to set up a project with a much more achievable outcome – even if we knew it would have much less impact.

So let’s go back to my four questions and apply them to this project…

1. Did we know what the problem was? – Well, yes, I would say we were pretty clear on that.

2. Did we know what the solution to the problem was? – Hmm, not really. We had some ideas that were informed by past experiences, but I think we still had quite a bit to learn. The issue was that there was no real evidence base on ‘what works’ in the setting we were working in, so the only way to find out was trial and (quite a lot of) error.

3. Were we free to implement the solution? – Absolutely not! We were completely dependent on the whims of the staff, and in particular the senior management, of the parliament in question.

4. Were we incentivised by money? – No, not really. I was working for a non-profit organisation and Dr Phunza was a university lecturer. If money had been withheld it would just have meant that some of the other activities we were planning would not have been possible. I suspect that I would still have found funds, even if it was from my own pocket, to pay Dr Phunza.

The other thing worth saying is that, given how hard both Dr P and I worked to get the project running, I think we would have found it quite insulting and demotivating to be told that we would only be paid if we were successful – it would have seemed rather rude to imply that we needed financial incentives in order to bother trying!

In other words, I don’t think this type of project would be suitable for RBA. There are many risks inherent in funding such a project, but the implementer not bothering to try is not one of them – and thus the risk mitigation strategy of RBA would be unnecessary, and potentially damaging.

Does this mean that I think our donors should have just left us alone to get on with our project? Absolutely not! I am well aware that many development actors spend many years working hard at interventions they truly believe in which are, in fact, pointless or even damaging. So I am not suggesting that donors should just let people do what they like so long as they are well-intentioned. However, I think we need to choose mechanisms for scrutiny and incentivising that fit the particular aims and context of the development programme in question. And where we don’t have good mechanisms to hand, we need to continue to innovate to develop systems that help us achieve the impacts we seek.

UPDATE: After writing this blog I have had quite a few interesting discussions on this topic which have moved my thinking on a bit. In particular, Owen Barder gave some useful comments via Twitter. What I took from that discussion (if I understood correctly) was that in the case I gave, RBA could still have been used, but some sort of ‘risk premium’ would have had to be built into the payment schedule – i.e. the donor would have had to add some extra funds to each payment, above and beyond the actual costs. He also took issue with my claim that implementers are not incentivised by money – if that were really the case, he asked, would implementers spend so much time trying to please donors? A fair point! So perhaps combining a risk premium with PBR (payment by results) would ensure that the implementer was still incentivised to deliver while also being able to mitigate the risk that one or more milestones were not met. This still leaves me with some unanswered questions – one is how you work out the extent of the risk in new and novel programmes. Another point, made by an organisation on a milestone-based contract, is that it reduces their opportunity for innovation – they are tied down to delivering certain milestones, and if they realise that better development impact could be achieved by doing something they had not predicted at the time the contract was negotiated, this is difficult. So in summary, this is a complicated topic with far more pros and cons than I realised when I started writing! But I am grateful to people for continuing to educate me.



Should we be worried about policy makers’ use of evidence?

A couple of papers have come out this week on policy makers’ use of evidence.


Policy makers are apparently floating around in their own little bubbles – but should this be a cause for concern?

The first is a really interesting blog by Mark Chataway, a consultant who has spent recent months interviewing policy makers (thanks to @PrachiSrivas for sharing this with me). His conclusion, after speaking to a large number of global health and development policy makers, is that most of them live in a very small bubble. They do not read widely and instead rely on information shared with them via Twitter, blogs or email summaries.

The blog is a good read – and I look forward to reading the full report when it comes out – but I don’t find it particularly shocking and actually, I don’t find it particularly worrying.

No policymaker is going to be able to keep abreast of all the new research findings in his/her field of interest. Even those who do read some of the excellent specialist sources mentioned in the article will only ever get a small sample of the new information being generated. In fact, trying to prospectively stay informed about all research findings of potential future relevance is an incredibly inefficient way to achieve evidence-informed decision making. For me, a far more important question is whether decision makers access, understand and apply relevant research knowledge at the point at which an actual decision is being made.

Enter DFID’s first ever Evidence Survey – the results of which were published externally this week.

This survey (which I hear was carried out by a particularly attractive team of DFID staff) looked at a sample of staff across grades (from grade ‘B1d’ to ‘SCS’, in case that means anything to you…) and across specialities.

So, should we be confident about DFID staff’s use of evidence?

Well, partly…

The good news is that DFID staff seem to value evidence really highly. In fact, as the author of the report gloats, there is even evidence that DFID values evidence more than the World Bank (although if you look closely you will see this is a bit unfair to our World Bank colleagues since the questions asked were slightly different).

And there was recognition that the process for getting new programmes approved does require staff to find and use evidence. The DFID business case requires staff to analyse the evidence base which underlies the ‘strategic need’ and the evidence which backs up different options for intervening. Guidance on how to assess evidence is provided. The business case is scrutinised by a chain of managers and eventually a government minister. Controversial or expensive (over £40m) business cases have an additional round of scrutiny from the internal Quality Assurance Unit.

Which is all great…

But one problem which is revealed by the Evidence Survey, and by recent internal reviews of DFID process, is that there is a tendency to forget about evidence once a programme is initiated. Anyone who has worked in development knows that we work in complex and changing environments and that there is usually not clear evidence of ‘what works’. For this reason it is vital that development organisations are able to continue to gather and reflect on emerging evidence and adapt to optimise along the way.

A number of people on Twitter have also picked up on the fact that a large proportion of DFID staff failed some of the technical questions – on research methodologies, statistics etc. Actually, this doesn’t worry me too much since most of the staff covered by the survey will never have any need to commission research or carry out primary analysis. What I think is more important is whether staff have access to the right levels of expertise at the times when they need it. There were some hints that staff would welcome more support and training so that they were better equipped to deal with evidence.

A final area for potential improvement would be on management prioritisation of evidence. Encouragingly, most staff felt that evidence had become more of a priority over recent years – but they also tended to think that they valued evidence more than their managers did – suggesting a continued need for managers to prioritise this.

So, DFID is doing well in some areas, but clearly has some areas it could improve on. The key for me will be to ensure there are processes, incentives and capacity to incorporate evidence at all key decision points in a programme cycle. From the results of the survey, it seems that a lot of progress has been made and I for one am excited to try to get even better.



Nerds without borders – Justin Sandefur

It’s the last in the series of Nerds without Borders but don’t worry, it’s a good one… it’s only the Center for Global Development’s JUSTIN SANDEFUR! Find him on Twitter as @JustinSandefur.

I’m not trying to start rumours*, but has anyone ever seen these two men in the same room??

1. What flavour of nerdy scientist/researcher are you?

I’m an economist. I’m usually reluctant to call myself a scientist, as I have mixed feelings about the physics-envy that infects a lot of the social sciences. But for the purposes of your blog series on nerds, I’m happy to play the part. To play up the nerdy part, I guess you could call me an applied micro-econometrician. I live amongst the sub-species of economists obsessed with teasing out causation from correlations in statistical data. In the simplest cases (conceptually, not logistically), that means running randomized evaluations of development projects.

By way of education, I spent far too many years studying economics: master’s, doctorate, and then the academic purgatory known as a post-doc. But my training was pretty hands-on, which is what made it bearable. Throughout grad school I worked at Oxford’s Centre for the Study of African Economies, running field projects in Kenya, Tanzania, Ghana, Liberia, and Sierra Leone on a wide range of topics — from education to land rights to poverty measurement.

2. What do you do now?

I’m a research fellow at the Center for Global Development (CGD) in Washington, D.C.  CGD is a smallish policy think tank.  If most of development economics can be characterized (perhaps unfairly) as giving poor countries unsolicited and often unwelcome policy advice, CGD tries to turn that lens back around on rich countries and analyze their development policies in areas like trade, climate, immigration, security, and of course aid.

But getting to your question about what I actually do on a day-to-day basis: a lot of my work looks similar to academic research. The unofficial CGD slogan on the company t-shirts used to be “ending global poverty, one regression at a time.” So I still spend a good chunk of my time in front of Stata running regressions and writing papers.

3. What has research got to do with international development?

That’s a question we spend a lot of time wrestling with at CGD.  Observing my colleagues, I can see a few different models at work, and I’m not sure I’d come down in favor of one over the others.

The first is the “solutionism” model, to use a less-than-charitable name.  I think this is the mental model of how research should inform policy that an increasing number of development economists adhere to.  Researchers come up with new ideas and test promising policy proposals to figure out what will work and what won’t.  Once they have a solution, they disseminate those findings to policymakers who will hopefully adopt their solutions.  Rarely is the world so linear in practice, but it’s a great model in theory.

The second approach is much more indirect, but maybe more plausible.  I’ll call it a framing model for lack of a better term. Research provides the big picture narrative and interpretive framework in which development policymakers make decisions.  Dani Rodrik has a fascinating new paper where he makes the argument that research — “the ideas of some long-dead economist”, as Keynes put it — often trumps vested interests by influencing policymakers’ preferences, shaping their view of how the world works and thus the constraints they feel they face, and altering the set of policy options that their advisers offer them.

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

I moved to DC as a firm believer in the first model.  CGD gradually pulled me toward the second model.  But when I observe the interface between research and development policymaking in this town, I feel like the third model probably has the most empirical support.

4. What have you been up to recently?

Too many things, but let me pick just one.

This week I’m trying to finally finish up a long-overdue paper on the role of development aid during the war in Afghanistan, together with my colleagues Charles Kenny and Sarah Dykstra. We measure changes over time in aid to various Afghan districts, and look for effects on economic development, public opinion (in favor of the Karzai government and/or the Taliban), and ultimately the level of violence as measured by civilian and military casualties. To make a long story short: we find some modest but statistically significant economic return to the billions of dollars spent in aid — even though it was targeted to the most violent, least poor areas. But we see no effects on either public opinion or violence.

Interestingly, changes over time in public opinion and violence move together quite significantly, in line with some of the basic tenets of counterinsurgency warfare.  But as far as we can measure in Afghanistan, development aid has proven fairly ineffective, on average, at affecting those non-economic outcomes.  Even where households are getting richer and are more satisfied with government services, we see no significant change in support for insurgent groups let alone any decline in violence.

5. What advice would you give to other science types who want to work in development?

People will tell you to get some practical experience, to broaden your interests, to develop your non-research skill set, and so on.  Ignore all that.  Development doesn’t need more smooth-talking development policy experts; development needs world-class experts in specific and often very technical fields.  Follow your research interests and immerse yourself in the content.  If you know what you’re talking about, the rest will fall into place.

6. Tell us something to make us smile?

I don’t think I believe the advice I just offered under the previous question.  Nor have I really followed it.  But I want to believe it, so hopefully that counts for something.

Thanks Justin – and indeed all my wonderful nerds. It’s been so interesting to hear about everyone’s different career paths and views on research and international development. See the rest of them here: part 1, 2, 3, 4, 5 and 6.

*I am



If he’s going to rest in peace, we might need to stop squabbling

The media coverage of Nelson Mandela’s passing has provided us with lots of opportunities to remind ourselves of his wisdom and kindness. But, unfortunately, it has also revealed some rather unedifying behaviour which makes me wonder how much we have really learnt from this great man.

Within a few hours of his death, alongside messages of shock and admiration, there was a plethora of shrill messages that person/group X did not deserve to honour him or that person/group Y was not honouring him correctly or that person/group Z was not honouring him sufficiently.

I mean, just to remind you, this man forgave the people who sent him to prison for twenty-seven years! But in the midst of our admiration for him we proved incapable of forgiving people who posted ill-judged Facebook messages.

And lest you think this is a tirade against all the angry people out there, I should add that this episode made me reflect on my own tendency to judge and to hold grudges. I have been known to fly into a rage because my water company failed to fix a broken pipe for a few months; I can enter a dark mood because someone slights me on Twitter; and I have seriously considered plotting the downfall of a colleague who stayed in a meeting room beyond her allotted timeslot.

Twenty-seven years people, twenty-seven years.

So, my resolution, and one which I welcome you to join me in, is to try to honour Mandela by being just a little more kind and a little more forgiving to my fellow humans. To attempt not to assume that others are out to get me and to remember that most people are trying their best.

I’m not sure how long I’ll succeed – but I reckon there can be no harm in trying – so go on, fire some aggressive comments at me and watch for my zen-like reaction.
