kirstyevidence

Musings on research, international development and other stuff



Holding decision makers to account for evidence use

Evidence-informed policy – it’s a wonderful thing. But just how widespread is it? The ‘Show your workings’ report from the Institute for Government (and collaborators Sense About Science and the Alliance for Useful Evidence) has asked this question and concluded… not very. It states “there [are] few obvious political penalties for failing to base decisions on the best available evidence”. I have to say that as a civil servant this rings true. It’s not that people don’t use evidence – actually most civil servants, at least where I work, do. But there are no good systems in place to distinguish between people who have systematically looked at the full body of evidence and appraised its strengths and weaknesses – and those who have referenced a few cherry-picked studies to back up their argument.

[Image: a cat stuck up a tree]

Rosie is my actual cat’s name. And she does indeed make many poor life decisions. Incidentally, I named my other cat ‘Mouse’ and now that I am trying to teach my child to identify animals I am wondering just how wise a life decision that was…

The problem for those scrutinising decision making – parliament, audit bodies and, in the case of development, the Independent Commission for Aid Impact – is that if you are not a topic expert it can be quite hard to judge whether the picture of evidence presented in a policy document represents an impartial assessment of the state of knowledge. The IfG authors realised this was a problem quite early in their quest – and came up with a rather nifty solution. Instead of trying to decide if decisions are based on an unbiased assessment of evidence, they simply looked at how transparent decision makers had been about how they had appraised evidence.

Now, on the evidence supply side there has been some great work to drive up transparency. In the medical field, Ben Goldacre is going all guns blazing after pharmaceutical companies to get them to clean up their act. In international development, registers of evaluations are appearing and healthy debates are emerging on the nature of pre-analysis plans. This is vitally important – if evaluators don’t declare what they are investigating and how, it is far too easy for them to not bother publishing findings which are inconvenient – or to try multiple types of analysis until, by chance, one gives them a more agreeable answer.

But as the report shows, and as others have argued elsewhere, there has been relatively little focus on transparency on the ‘demand’ side. And by overlooking this, I think that we might have been missing a trick. You see, it turns out that the extent to which a policy document explicitly sets out how evidence has been gathered and appraised is a rather good proxy for systematic evidence appraisal. And the IfG’s hypothesis is that if you could hold decision makers to account for their evidence transparency, you could go some way towards improving the systematic use of evidence to inform decisions.

The report sets out a framework which can be used to assess evidence transparency. As usual, I have a couple of tweaks I would love to see. I think it would be great if the framework more explicitly included an assessment of the search strategy used to gather the initial body of evidence – and perhaps rewarded people for making use of existing rigorous synthesis products such as systematic reviews. But in general, I think it is a great tool and I really hope the IfG et al. are successful in persuading government departments – and crucially those who scrutinise them – to make use of it.

 



Impact via infiltration

Two blogs ago I linked to an article which I attributed to “the ever-sensible Michael Clemens”. Shortly afterwards @m_clem tweeted:

.@kirstyevidence compares @JustinSandefur to @jtimberlake 🔥—but me, I’m “sensible”.Some guys got it, some don’t. https://t.co/KphJ0ITmr8

I mention this in part because it made me chortle but mainly because it prompted me to look back at that Justin Sandefur interview. And on re-reading it, I was really struck by one of Sandefur’s models for how research relates to policy:

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

Since that interview was published, I wrote a literature review for DFID which looked at the impact of research on development. And, having spent months of my life scouring the literature, I am more convinced than ever that the Sandefur/Timberlake effect (as it will henceforth be known) is one of the main ways in which investment in research leads to change.

This pathway can be seen clearly in the careers of successful researchers who become policy makers/advisors. For example, within DFID, the chief scientist and chief economist are respected researchers. But the significant impacts they have had on policy decisions within DFID must surely rival the impacts on society they have had via their academic outputs?

And the case may be even stronger if you also examine ‘failed scientists’ – like, for example, me! The UK Medical Research Council invested considerable amounts of funding to support my PhD studies and post-doc career. And I would summarise the societal impact of my research days as… pretty much zilch. I mean, my PhD research was never even published and my post-doc research was on a topic which was niche even within the field of protozoan parasite immunology.


Undercover nerds – surely the societal impact of all those current and former academics goes beyond their narrow research findings?

In other words, I wouldn’t have to be very influential within development to achieve more impact than I did in my academic career. My successful campaign while working at the Wellcome Trust to get the canteen to stock diet Irn Bru probably surpasses my scientific contributions to society! But more seriously, I do think that the knowledge of research approaches, the discipline of empirical thinking, the familiarity with academic culture – and, on occasion, the credibility of being a ‘Dr’ – have really helped me in my career. Therefore, any positive – or indeed negative – impact that I have had can partly be attributed to my scientific training.

Of course, just looking at isolated individual researchers won’t tell us whether, overall, investment in research leads to positive societal impact – and if so, whether the “S/T effect” (I’m pretty sure this is going to catch on so I have shortened it for ease) is the major route through which that impact is achieved. Someone needs to do some research on this if we are going to figure out if it really is a/the major way in which research impacts policy/practice.

But it’s interesting to note that other people have a similar hypothesis: Bastow, Tinkler and Dunleavy carried out a major analysis of the impact of social science in the UK* and their method for calculating the benefit to society of social science investments was to estimate the amount that society pays to employ individuals with post-grad social science degrees.** In other words they assumed that the major worth of all that investment in social science was not in its findings but in the generation of experts. I think the fact that the authors are experimenting with new methodologies to explain the value of research that go beyond the outdated linear model is fabulous.

But wait, you may be wondering, does any of this matter? Well yes, I think it does because a lot of time and energy are being put into the quest to measure the societal impact of research. And in many cases the impact is narrowly defined as the direct effect of research findings and/or research derived technologies. The recent REF impact case studies did capture more diverse impacts including some that could be classified within the S/T™ effect. But I still get the impression that such indirect effects are seen as secondary and unimportant. The holy grail for research impact still seems to be linear, direct, instrumental impact on policy/practice/the economy – despite the fact that:

  1. This rarely happens
  2. Even when we think this is happening, there is a good chance that evidence is in fact just being used symbolically
  3. Incentivising academics to achieve direct impact with their research results can have unintended and dangerous results

Focussing attention on the indirect impact of trained researchers, not as an unimportant by-product but as a major route by which research can impact society, is surely an important way to get a more accurate understanding of the benefits (or lack thereof) of research funding.***

So, in summary, I think we can conclude that Justin Sandefur is quite a sensible bloke.

And, by the way, have any of you noticed how much Michael Clemens resembles George Clooney?


* I have linked to their open access paper on the study but I also recommend their very readable book which covers it in more detail along with loads of other interesting research – and some fab infographics.

** Just to be pedantic, I wonder if their methodology needs to be tweaked slightly – they have measured value as the cost of employing social science post-grad degree holders but surely those graduates have some residual worth beyond their research training? I would think that the real benefit would need to be measured as the excess that society was willing to pay for a social science post-grad degree holder compared to someone without..?
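For what it’s worth, here is a rough sketch of the distinction I am trying to make – the notation is mine, not Bastow et al.’s:

$$V_{\text{measured}} = \sum_i w_i \qquad \text{vs.} \qquad V_{\text{suggested}} = \sum_i \left( w_i - w'_i \right)$$

where $w_i$ is what society pays to employ social science post-grad degree holder $i$, and $w'_i$ is what it would pay a comparable person without the post-grad degree. The left-hand quantity is (as I read it) roughly what the authors estimate; the right-hand one is the ‘excess’ I am suggesting would better capture the benefit of the research training itself.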

*** Incidentally, this is also my major reason for supporting research capacity building in the south – I think it is unrealistic to expect that building research capacity is going to yield returns via creation of new knowledge/technology – at least in the short term. But I do think that society benefits from having highly trained scientific thinkers who are able to adapt and use research knowledge and have influence on policy either by serving as policy makers themselves or by exerting evidence-informed influence.



The politics of evidence supply and demand

I have written before about the separate functions of evidence supply and demand. To recap, supply concerns the production and communication of research findings while demand concerns the uptake and usage of evidence. While this model can be a useful way to think about the process of evidence-informed policy making, it has been criticised for being too high level and not really explaining what evidence supply and demand looks like in the real world – and in particular in developing countries.

I was therefore really pleased to see this paper from the CLEAR centre at the University of the Witwatersrand which examines in some detail what supply and demand for evidence, in this case specifically evaluation evidence, looks like in five African countries.

What is particularly innovative about this study is that they compare the results of their assessments of evaluation supply and demand with a political economy analysis and come up with some thought-provoking ideas about how to promote the evidence agenda in different contexts. In particular, they divide their five case study countries into two broad categories and suggest some generalisable rules for how evidence fits into each.

Developmental patrimonial: the ‘benevolent dictator’

Two of the countries – Ethiopia and Rwanda – they categorise as broadly developmental patrimonial. In these countries, there is strong centralised leadership with little scope for external actors to influence. Perhaps surprisingly, in these countries there is relatively high endogenous demand for evidence; the central governments have a strong incentive to achieve developmental outcomes in order to maintain the government’s legitimacy and therefore, at least in some cases, look for evaluation evidence to inform what they do. These countries also have relatively strong technocratic ministries which may be more able to deal with evidence than those in some other countries. It is important to point out that these countries are not consistently and systematically using research evidence to inform decisions and that in general they are more comfortable with impact evaluation evidence which has clear pre-determined goals rather than evidence which questions values. But there does seem to be some existing demand and perhaps the potential for more in the future. When it comes to supply of evaluations, the picture is less positive: although there are examples of good supply, in general there is a lack of expertise in evaluations, and most evaluations are led by northern experts.

Neopatrimonial: a struggle for power and influence

The other three countries – Malawi, Zambia and Ghana – are categorised as broadly neopatrimonial. These countries are characterised by patronage-based decision making. There are multiple interest groups which are competing for influence and power largely via informal processes. Government ministries are weaker and stated policy may bear little relationship to what actually happens. Furthermore, line ministries are less influenced by the Treasury and thus incentives for evidence from the Treasury are less likely to have an effect. However, the existence of multiple influential groups does mean that there are more diverse potential entry points for evidence to feed into policy discussions. Despite these major differences in demand for evidence, evaluation supply in these countries was remarkably similar to that in the developmental patrimonial countries – i.e. some examples of good supply but in general relatively low capacity and reliance on external experts.

I have attempted to summarise the differences between these two categories of countries – as well as the commonalities – in the table below.

[Table: summary of evaluation demand and supply in the developmental patrimonial and neopatrimonial country groupings]

There are a couple of key conclusions which I drew from this paper. Firstly, if we are interested in supporting the demand for evidence in a given country, it is vital to understand the political situation to identify entry points where there is potential to make some progress on use of evidence. The second point is that capacity to carry out evaluations remains very low despite a large number of evaluation capacity building initiatives. It will be important to understand whether existing initiatives are heading in the right direction and will produce stronger capacity to carry out evaluations in due course – or whether there is a need to rethink the approach.



Science to the rescue: the big tech transfer myth

This is part 2 of a series of blogs – read part 1 here.

As I mentioned in yesterday’s blog, DFID’s recent lit review on links between science and development started by figuring out how people think science leads to development outcomes. By far the most common justification for investment in research given by developing country policy makers was its expected contribution to economic growth. The Nigerian Science, Technology and Innovation Policy is typical of many in stating:

“Specifically, the new [Science, Technology and Innovation] Policy is designed to provide a strong platform for science, technology and innovation engagements with the private sector for the purpose of promoting sound economic transformation that is citizen centred”

This focus is not likely to surprise anyone who has attended conferences related to science and international development; huge faith is put into Science, Technology and Innovation as drivers of economic development. If evidence is required to back this up, the example of the Asian Tiger economies, which invested in research and subsequently saw unprecedented growth, is frequently cited. For example, this recent communique from the Governments of Ethiopia, Mozambique, Rwanda, Senegal, and Uganda states:

“Inspired by the recent success of the Asian Tigers, We, the African governments, resolve to adopt a strategy that uses strategic investments in science and technology to accelerate Africa’s development into a developed knowledge-based society within one generation.”

If pressed on how research will lead to growth, it is typical to hear statements based broadly on endogenous growth theory: research will lead to new knowledge which will contribute to private sector development which will lead to growth.

So, what does the evidence tell us?

Well for a start, contrary to popular belief, there is little evidence to suggest that public investment in research was a major factor in the economic development of the Asian Tigers. Theories about what did cause this ‘development miracle’ abound but they can be broadly split into two categories. There are those who believe that increased growth was simply due to increased financial investments in the economy – and clearly this camp does not think that public investment in research played much of a role. Then there are those who believe that ‘knowledge’ was a key factor in explaining the rapid growth. At first glance this theory seems consistent with those who advocate for public investment in research to stimulate growth – but when you delve deeper you see that even this latter camp does not suggest that publicly-funded research knowledge was a major driver of growth. In fact, detailed case studies suggest that knowledge which drove growth was accumulated mainly through learning from the processes and technologies of more developed economies and gradually developing in-house R&D capabilities.

A World Bank paper from 1999 summarises the findings of case studies from a range of ‘Asian Tiger’ firms as follows:

“. . . firm histories provide details of a complex interactive process in which. . . importers furnished some knowledge of production engineering. . . [Firms] were forced to constantly reduce cost through improving productivity.”

Of course just because public investment in research did not lead to past economic transformations doesn’t mean that it can’t do so in the future. There are many examples of initiatives specifically aimed at stimulating economic growth through publicly-funded research. Perhaps the most well-known – and popular – intervention is the establishment of a ‘science park’. These are generally property developments located near universities which aim to support technology start-ups and companies which ‘spin off’ from the university. They aim to emulate successful technology hubs in the USA, in particular Silicon Valley. The idea is that research in the university will lead to products and technologies which will be commercialised by start-up companies located in the science park.

There has been an explosion of science parks in emerging and developing countries. However, the evidence of their success is less abundant. Beyond a few high profile science parks linked to world-leading universities, there is little evidence that science parks actually succeed in supporting the commercialisation of university-generated research results; studies of science parks from both high-income and middle-income countries demonstrate an almost complete lack of technology transfer from universities to firms. Firms do report some advantages to location in science parks including access to information and informal linkages with academic colleagues. However there is little evidence that firms perform better in science parks than if they were located elsewhere.

In a 2004 article, Professor Stuart Macdonald of the University of Sheffield and Yunfeng Deng of Qingdao National Hi-Tech Industrial Zone describe science parks in developing countries as ‘pure cargo cult’ – aiming to superficially emulate Silicon Valley without creating any of the underlying factors which were necessary for its success. They conclude:

“. . .despite all the enthusiasm, there is little evidence that science parks work as their supporters say, and growing evidence that they do not.”

What could possibly go wrong?

Other interventions to drive economic development by supporting technology transfer are not much more promising. Technology transfer offices have been set up in many universities world-wide – however, the evidence shows that the vast majority of such offices work at a loss. In fact, patenting and licensing of academic research knowledge only generates significant income for the very top tier (i.e. the top few percent in global rankings) of universities internationally. A 2005 (paywalled) paper by University of Cape Town academic Dr Anthony Heher, which aims to draw conclusions for developing countries from university licensing data from wealthier economies, concludes:

“Without a well-funded, high quality research system, it is unlikely that a technology transfer programme will make any significant contribution to economic development. It is also not clear that any other country can simply emulate performance of the USA in deriving benefit from technology transfer due to differing social and economic conditions.”

Given this, it seems unlikely that technology transfer from universities will have a significant impact on economic development in most developing countries in the short to medium term. In fact, there is evidence that this is unrealistic even in developed countries. A recent article by Times science columnist Matt Ridley concluded that:

“The idea that innovation happens because you put science in one end of the pipe and technology comes out the other end goes back to Francis Bacon, and it is largely wrong.”

There is one silver lining to the evidence on public research and economic growth – there is good evidence that the capacity to use research is an important factor in driving economic growth. And that fact leads neatly on to tomorrow’s blog which will focus on human capital.

Part 3 now available here.



Science to the rescue: does investment in research drive international development?

Meanwhile, at the Kardashians…

The assertion that research/science* are crucial drivers of development is made so frequently that you could be forgiven for assuming that this is a proven fact. However, having read tens if not hundreds of books, reports and papers about science and international development, I have been struck by the distinct lack of evidence presented to back up the link. I have noticed that authors try to trick us into thinking there is evidence in a couple of ways. Firstly, they often quote eminent, famous or glamorous figures who say that research is crucial for development – and present that as if it is evidence. Alternatively, they will present anecdotes of where research has led to positive changes for poor people and use that to conclude that overall research must be a good thing. This may appear compelling at first, but I can’t help thinking that if I wanted to make the case for gambling, I could probably find quite a number of people whose lives had been improved by winning the lottery – and I am not sure anyone would accept that as good evidence that gambling is overall a great idea.

For me, the question we need to ask is not can research ever lead to good outcomes – but rather, on average, does investment in research lead to better development outcomes than investing an equivalent amount of funds in an alternative intervention?

Luckily for you, DFID has recently produced a literature review which attempted to capture the evidence relating to that question – and even better, I was the lead author. And so, I thought I would write a series of blogs summarising the paper.

The starting point for the lit review was to understand how people think research leads to socio-economic development. To uncover this, research policy documents from developing country governments and major development donors were examined – and informal discussions were held with key actors in the sector. Four common justifications for investing in research to drive development emerged: research was proposed to drive economic growth; to improve human capital; to generate new products and technologies that benefit the poor; and to support evidence-informed policy and practice. Over the next few blogs, I will look at each of these proposed pathways and summarise what the evidence tells us.

It will probably come as no surprise to readers that the answer to the research question I pose above is ‘sometimes’. But the evidence on this topic reveals that the links between research and socioeconomic development are fascinating, complicated and, occasionally, very much at odds with conventional wisdom.

If you want to get subsequent blogs direct to your email just click the ‘follow blog by email’ button on the top right of this post. And if you are too impatient to wait for further blogs, you can go ahead and read the full paper here!

Part 2 is now available here.

*I’ll use these terms interchangeably in this series of blogs



Experimental methodologies… and baby pandas

Another week, another blog pointing out that RCTs are not the ‘gold standard’ of evidence despite the fact that NOBODY is saying they are. To be fair to the blogger, he is simply summarising a paper written by Angus Deaton – a man who is a bit of an enigma to me. I have heard him speak and been blown away by how thoughtful, insightful and challenging he is – until he comes to the topic of RCTs when he seems to become strawmantastic. Anyway, I’ve written about misconceptions about RCTs so many times in the past that I am sure you are bored of hearing me – in fact I am even bored of hearing myself drone on about it. So, in lieu of another post on this matter, I present to you links to previous posts (here, here and here)… and a picture I have drawn for you of a baby panda. Enjoy.

[Drawing: baby panda]



Make love not war: bringing research rigour and context together

I’ve just spent a few days in Indonesia having meetings with some fascinating people discussing the role of think tanks in supporting evidence-informed policy. It was quite a privilege to spend time with people who had such deep and nuanced understanding of the ‘knowledge sectors’ in different parts of the world (and if you are interested in learning more, I would strongly recommend you check out some of their blogs here, here and here).

However, one point of particular interest within the formal meetings was that research quality/rigour often seemed to be framed in opposition to considerations of relevance and context. I was therefore interested to see that Lant Pritchett has also just written a blog with essentially the same theme – making the point that research rigour is less important than contextual relevance.

I found this surprising – not because I think context is unimportant – but because I do not see why the argument needs to be dichotomous. Research quality and research relevance are two important issues and the fact that some research is not contextually relevant does not in any way negate the fact that some research is not good quality.

How not to move a discussion forward

To illustrate this, let’s consider a matrix comparing quality with relevance.

|  | Low quality | High quality |
| --- | --- | --- |
| Low contextual understanding | The stuff which I think we can all agree is pointless | Rigorous research which is actually looking at irrelevant/inappropriate questions due to poor understanding of context |
| High contextual understanding | Research which is based on deep understanding of context but which is prone to bias due to poor methodology | The good stuff! Research which is informed by good contextual understanding and which uses high quality methods to investigate relevant questions. |

Let me give some examples from each of these categories:

Low quality low contextual understanding

I am loath to give any examples for this box since it will just offend people – but I would include in this category any research which involves a researcher with little or no understanding of the context ‘parachuting in’ and then passing off their opinions as credible research.

High quality, low contextual understanding

An example of this is here – a research study on microbicides to prevent the transmission of HIV which was carried out in Zambia. This research used an experimental methodology – the most rigorous approach one can use when seeking to prove causal linkages. However, the qualitative research strand which was run alongside the trial demonstrated that, due to poor understanding of sexual behaviours in the context they were working in, the experimental data were flawed.

Low quality, high contextual understanding

An example of this is research to understand the links between investment in research and the quality of university education which relies on interviews and case studies with academics. These academics have a very high understanding of the context of the university sector and you can therefore see why people would choose to ask them these questions. However, repeated studies show that academics almost universally believe that investment in research is crucial to drive up the quality of education within universities, while repeated rigorous empirical studies reveal that the relationship between research and education quality is actually zero.

High quality, high contextual understanding

An example here could be this set of four studies of African policy debates. The author spent extended periods of time in each location and made every effort to understand the context – but she also used high quality qualitative research methods to gather her data. Another example could be the CDD paper I have blogged about before where an in-depth qualitative approach to understand context was combined with a synthesis of high-quality experimental research evidence. Or the research described in this case study – an evaluation carried out in Bolivia which demonstrates how deep contextual understanding and research rigour can be combined to achieve impact.

Some organisations will be really strong on relevance but be producing material which is weak methodologically and therefore prone to bias. This is dangerous since – as described above – poor quality research may well give answers – but they may be entirely the wrong answers to the questions posed. Other organisations will be producing stuff which is highly rigorous but completely irrelevant. Again, this is at best pointless and at worst dangerous if decision makers do not recognise that it is irrelevant to the questions they are grappling with.

In fact, the funny thing is that when deciding whether to concentrate more on improving research relevance or research quality… context matters! The problem of poor quality and the problem of low contextual relevance both occur and both reduce the usefulness of the research produced – and arguing about which one is on average more damaging is not going to help improve that situation.

One final point that struck me from reading the Pritchett blog is that he appears to have a fear that a piece of evidence which shows that something works in one context will be mindlessly used to make the argument that the same intervention should be used in another. In other words, there is a concern that rigorous evidence will be used to back up normative policy advice. If evidence were to be used in that way, I would also be afraid of it – but that is fundamentally not what I consider to be evidence-informed policy making. In fact, I disagree that any research evidence ever tells anyone what they should do.

Thus, I agree with Pritchett that evidence of the positive impact of low class sizes in Israel does not provide the argument that class sizes should be lowered in Kenya. But I would also suggest that such evidence does not necessarily mean that policy makers in Israel should lower class sizes. This evidence provides some information which policy makers in either context may wish to consider – hence evidence-informed policy making. The Israeli politicians may come to the conclusion that the evidence of the benefit of low class sizes is relatively strong in their context. However, they may well make a decision not to lower class sizes due to other factors – for example finances. I would still consider this decision to be evidence-informed.

Conversely, the policy makers in Kenya may look at the Israeli evidence and conclude that it refers to a different context and that it may therefore not provide a useful prediction of what will happen in Kenya – however, they may decide that it is sufficient to demonstrate that in some contexts lower class sizes can improve outcomes and that that is sufficient evidence for them to take a decision to try the policy out.

In other words, political decisions are always based on multiple factors – evidence will only ever be one of them. And evidence from alternative contexts can still provide useful information – providing you don’t overinterpret that information and assume that something that works in one context will automatically transfer to another.



Should we be worried about policy makers’ use of evidence?

A couple of papers have come out this week on policy makers’ use of evidence.


Policy makers are apparently floating around in their own little bubbles – but should this be a cause for concern?

The first is a really interesting blog by Mark Chataway, a consultant who has spent recent months interviewing policy makers (thanks to @PrachiSrivas for sharing this with me). His conclusion after speaking to a large number of global health and development policy makers is that most of them live in a very small bubble. They do not read widely and instead rely on information shared with them via Twitter, blogs or email summaries.

The blog is a good read – and I look forward to reading the full report when it comes out – but I don’t find it particularly shocking and actually, I don’t find it particularly worrying.

No policymaker is going to be able to keep abreast of all the new research findings in his/her field of interest. Even those people who do read some of the excellent specialist sources mentioned in the article will only ever get a small sample of the new information that is being generated. In fact, trying to prospectively stay informed about all research findings of potential future relevance is an incredibly inefficient way to achieve evidence-informed decision-making. For me, a far more important question is whether decision makers  access, understand and apply relevant research knowledge at the point at which an actual decision is being made.

Enter DFID’s first ever Evidence Survey – the results of which were published externally this week.

This survey (which I hear was carried out by a particularly attractive team of DFID staff) looked at a sample of staff across grades (from grade ‘B1d to SCS’ in case that means anything to you..) and across specialities.

So, should we be confident about DFID staff’s use of evidence?

Well, partly…

The good news is that DFID staff seem to value evidence really highly. In fact, as the author of the report gloats, there is even evidence that DFID values evidence more than the World Bank (although if you look closely you will see this is a bit unfair to our World Bank colleagues since the questions asked were slightly different).

And there was recognition that the process for getting new programmes approved does require staff to find and use evidence. The DFID business case requires staff to analyse the evidence base which underlies the ‘strategic need’ and the evidence which backs up different options for intervening. Guidance on how to assess evidence is provided. The business case is scrutinised by a chain of managers and eventually a government minister. Controversial or expensive (over £40m) business cases have an additional round of scrutiny from the internal Quality Assurance Unit.

Which is all great…

But one problem which is revealed by the Evidence Survey, and by recent internal reviews of DFID process, is that there is a tendency to forget about evidence once a programme is initiated. Anyone who has worked in development knows that we work in complex and changing environments and that there is usually not clear evidence of ‘what works’. For this reason it is vital that development organisations are able to continue to gather and reflect on emerging evidence and adapt to optimise along the way.

A number of people on Twitter have also picked up on the fact that a large proportion of DFID staff failed some of the technical questions – on research methodologies, statistics etc. Actually, this doesn’t worry me too much since most of the staff covered by the survey will never have any need to commission research or carry out primary analysis. What I think is more important is whether staff have access to the right levels of expertise at the times when they need it. There were some hints that staff would welcome more support and training so that they were better equipped to deal with evidence.

A final area for potential improvement would be on management prioritisation of evidence. Encouragingly, most staff felt that evidence had become more of a priority over recent years – but they also tended to think that they valued evidence more than their managers did – suggesting a continued need for managers to prioritise this.

So, DFID is doing well in some areas, but clearly has some areas it could improve on. The key for me will be to ensure there are processes, incentives and capacity to incorporate evidence at all key decision points in a programme cycle. From the results of the survey, it seems that a lot of progress has been made and I for one am excited to try to get even better.



Nerds without borders – Justin Sandefur

It’s the last in the series of Nerds without Borders but don’t worry, it’s a good one… it’s only the Center for Global Development’s JUSTIN SANDEFUR! Find him on Twitter as @JustinSandefur

I’m not trying to start rumours*, but has anyone ever seen these two men in the same room??

1. What flavour of nerdy scientist/researcher are you?

I’m an economist. I’m usually reluctant to call myself a scientist, as I have mixed feelings about the physics-envy that infects a lot of the social sciences. But for the purposes of your blog series on nerds, I’m happy to play the part. To play up the nerdy part, I guess you could call me an applied micro-econometrician. I live amongst the sub-species of economists obsessed with teasing out causation from correlations in statistical data. In the simplest cases (conceptually, not logistically), that means running randomized evaluations of development projects.

By way of education, I spent far too many years studying economics: masters, doctorate, and then the academic purgatory known as a post-doc.  But my training was pretty hands on, which is what made it bearable.  Throughout grad school I worked at Oxford’s Centre for the Study of African Economies, running field projects in Kenya, Tanzania, Ghana, Liberia, and Sierra Leone on a wide range of topics — from education to land rights to poverty measurement.

 2. What do you do now?

I’m a research fellow at the Center for Global Development (CGD) in Washington, D.C.  CGD is a smallish policy think tank.  If most of development economics can be characterized (perhaps unfairly) as giving poor countries unsolicited and often unwelcome policy advice, CGD tries to turn that lens back around on rich countries and analyze their development policies in areas like trade, climate, immigration, security, and of course aid.

But getting to your question about what I actually do on a day to day basis: a lot of my work looks similar to academic research.  The unofficial CGD slogan on the company t-shirts used to be “ending global poverty, one regression at a time.”  So I still  spend a good chunk of my time in front of Stata running regressions and writing papers.

3. What has research got to do with international development?

That’s a question we spend a lot of time wrestling with at CGD.  Observing my colleagues, I can see a few different models at work, and I’m not sure I’d come down in favor of one over the others.

The first is the “solutionism” model, to use a less-than-charitable name.  I think this is the mental model of how research should inform policy that an increasing number of development economists adhere to.  Researchers come up with new ideas and test promising policy proposals to figure out what will work and what won’t.  Once they have a solution, they disseminate those findings to policymakers who will hopefully adopt their solutions.  Rarely is the world so linear in practice, but it’s a great model in theory.

The second approach is much more indirect, but maybe more plausible.  I’ll call it a framing model for lack of a better term. Research provides the big picture narrative and interpretive framework in which development policymakers make decisions.  Dani Rodrik has a fascinating new paper where he makes the argument that research — “the ideas of some long-dead economist”, as Keynes put it — often trumps vested interests by influencing policymakers’ preferences, shaping their view of how the world works and thus the constraints they feel they face, and altering the set of policy options that their advisers offer them.

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

I moved to DC as a firm believer in the first model.  CGD gradually pulled me toward the second model.  But when I observe the interface between research and development policymaking in this town, I feel like the third model probably has the most empirical support.

4. What have you been up to recently?

Too many things, but let me pick just one.

This week I’m trying to finally finish up a long overdue paper on the role of development aid during the war in Afghanistan, together with my colleagues Charles Kenny and Sarah Dykstra. We measure changes over time in aid to various Afghan districts, and look for effects on economic development, public opinion (in favor of the Karzai government and/or the Taliban), and ultimately the level of violence as measured by civilian and military casualties. To make a long story short: we find some modest but statistically significant economic return to the billions of dollars spent in aid — even though it was targeted to the most violent, least poor areas. But we see no effects on either public opinion or violence.

Interestingly, changes over time in public opinion and violence move together quite significantly, in line with some of the basic tenets of counterinsurgency warfare.  But as far as we can measure in Afghanistan, development aid has proven fairly ineffective, on average, at affecting those non-economic outcomes.  Even where households are getting richer and are more satisfied with government services, we see no significant change in support for insurgent groups let alone any decline in violence.

5. What advice would you give to other science types who want to work in development?

People will tell you to get some practical experience, to broaden your interests, to develop your non-research skill set, and so on.  Ignore all that.  Development doesn’t need more smooth-talking development policy experts; development needs world-class experts in specific and often very technical fields.  Follow your research interests and immerse yourself in the content.  If you know what you’re talking about, the rest will fall into place.

 6. Tell us something to make us smile?

I don’t think I believe the advice I just offered under the previous question.  Nor have I really followed it.  But I want to believe it, so hopefully that counts for something.

Thanks Justin – and indeed all my wonderful nerds. It’s been so interesting to hear about everyone’s different career paths and views on research and international development. See the rest of them here: part 1, 2, 3, 4, 5 and 6.

*I am



Nerds without borders – Beth Scott

Today’s nerd is a colleague of mine from the Department for International Development – the most excellent (and only occasionally scary) Beth Scott…

1. What flavour of nerdy scientist are you?

I couldn’t persuade Beth to give me a photo of her so I have drawn a beautiful portrait of her. Luckily, I think she will agree that it is an excellent likeness.

Just because I’m an anthropologist/behavioural scientist do NOT tell me I am ‘not a proper scientist’… Science A-levels, followed by a BA (Anthropology) and then an MSc (Control of Infectious Diseases) and a long stint as a Research Fellow: I do qual and quant work and am probably best described as a bit of a mixed-up scientist, or maybe I’m the equivalent of ambidextrous? And confession time… I’m a total monitoring and evaluation geek – don’t design your metrics right and you won’t deliver an effective intervention.

2. What do you do now?
I’m a ‘Health Advisor’ in DFID working in a team that commissions international health research, from the development of new drugs through to systems-based research to improve healthcare delivery in less-developed countries.

3. What has research got to do with international development?
Everything – I mean, do you simply make up your interventions and hope they work? Research is critical to telling us what works, how it works, informing programme design and delivery, measuring impact etc etc. It’s just that people find the word ‘research’ scary (and the words monitoring and evaluation even scarier) and we have to find other ways to describe what we’re doing sometimes.

4. What have you been up to recently?
All sorts. I’ve spent a weekend sitting in a dark hotel room near Charles de Gaulle airport, at a Research Programme Consortium Partners’ meeting as they bash out a new cross-country research project; there were a couple of days at the Product Development Partnerships Funders Group hearing about all sorts of wonderful advancements in drug and diagnostics discovery for malaria and neglected tropical diseases; and I’ve launched a call for a programme of implementation research to improve the delivery of integrated neglected tropical disease control programmes on the ground.

5. What advice would you give to other science types who want to work in development?

You’ve got to spend time overseas and make the time to really understand the reality of people’s lives on the ground. Always remain focussed on the impact you are hoping to have rather than starting with ‘what’ you want to do (outcome- rather than input-driven).

6. Tell us something to make us smile?
I’m off to scare people for the weekend. Literally. I will don a big black cape and a mask and jump out of the shadows at people as they travel through the ‘Scaresville’ experience at Kentwell Hall in Suffolk. A brilliant way to unwind and offload stress at the end of a busy period at work 🙂

Read the other interviews: 1, 2, 3, 4 and 5.