kirstyevidence

Musings on research, international development and other stuff



Impact via infiltration

Two blogs ago I linked to an article which I attributed to “the ever-sensible Michael Clemens”. Shortly afterwards @m_clem tweeted:

.@kirstyevidence compares @JustinSandefur to @jtimberlake 🔥—but me, I’m “sensible”.Some guys got it, some don’t. https://t.co/KphJ0ITmr8

I mention this in part because it made me chortle but mainly because it prompted me to look back at that Justin Sandefur interview. And on re-reading it, I was really struck by one of Sandefur’s models for how research relates to policy:

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

Since that interview was published, I wrote a literature review for DFID which looked at the impact of research on development. And, having spent months of my life scouring the literature, I am more convinced than ever that the Sandefur/Timberlake effect (as it will henceforth be known) is one of the main ways in which investment in research leads to change.

This pathway can be seen clearly in the careers of successful researchers who become policy makers or advisors. For example, within DFID, the chief scientist and chief economist are both respected researchers – and the significant impact they have had on policy decisions within DFID surely rivals the impact they have had on society via their academic outputs.

And the case may be even stronger if you also examine ‘failed scientists’ – like, for example, me! The UK Medical Research Council invested considerable amounts of funding to support my PhD studies and post-doc career. And I would summarise the societal impact of my research days as… pretty much zilch. I mean, my PhD research was never even published and my post-doc research was on a topic which was niche even within the field of protozoan parasite immunology.


Undercover nerds – surely the societal impact of all those current and former academics goes beyond their narrow research findings?

In other words, I wouldn’t have to be very influential within development to achieve more impact than I did in my academic career. My successful campaign while working at the Wellcome Trust to get the canteen to stock diet Irn Bru probably surpasses my scientific contributions to society! But more seriously, I do think that the knowledge of research approaches, the discipline of empirical thinking, the familiarity with academic culture – and, on occasion, the credibility of being a ‘Dr’ – have really helped me in my career. Therefore, any positive – or indeed negative – impact that I have had can partly be attributed to my scientific training.

Of course, just looking at isolated individual researchers won’t tell us whether, overall, investment in research leads to positive societal impact – and if so, whether the “S/T effect” (I’m pretty sure this is going to catch on so I have shortened it for ease) is the major route through which that impact is achieved. Someone needs to do some research on this if we are going to figure out whether it really is a major route by which research impacts policy and practice.

But it’s interesting to note that other people have a similar hypothesis: Bastow, Tinkler and Dunleavy carried out a major analysis of the impact of social science in the UK* and their method for calculating the benefit to society of social science investments was to estimate the amount that society pays to employ individuals with post-grad social science degrees.** In other words they assumed that the major worth of all that investment in social science was not in its findings but in the generation of experts. I think the fact that the authors are experimenting with new methodologies to explain the value of research that go beyond the outdated linear model is fabulous.

But wait, you may be wondering, does any of this matter? Well yes, I think it does because a lot of time and energy are being put into the quest to measure the societal impact of research. And in many cases the impact is narrowly defined as the direct effect of research findings and/or research derived technologies. The recent REF impact case studies did capture more diverse impacts including some that could be classified within the S/T™ effect. But I still get the impression that such indirect effects are seen as secondary and unimportant. The holy grail for research impact still seems to be linear, direct, instrumental impact on policy/practice/the economy – despite the fact that:

  1. This rarely happens
  2. Even when we think this is happening, there is a good chance that evidence is in fact just being used symbolically
  3. Incentivising academics to achieve direct impact with their research results can have unintended and dangerous results

Focussing attention on the indirect impact of trained researchers, not as an unimportant by-product but as a major route by which research can impact society, is surely an important way to get a more accurate understanding of the benefits (or lack thereof) of research funding.***

So, in summary, I think we can conclude that Justin Sandefur is quite a sensible bloke.

And, by the way, have any of you noticed how much Michael Clemens resembles George Clooney?


* I have linked to their open access paper on the study but I also recommend their very readable book which covers it in more detail along with loads of other interesting research – and some fab infographics.

** Just to be pedantic, I wonder if their methodology needs to be tweaked slightly – they have measured value as the cost of employing social science post-grad degree holders but surely those graduates have some residual worth beyond their research training? I would think that the real benefit would need to be measured as the excess that society is willing to pay for a social science post-grad degree holder compared to someone without one?
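To make that pedantic tweak concrete, here is a toy calculation (with entirely invented numbers – nothing below comes from the Bastow et al. study): the naive method values the training at the full salary bill, whereas the premium method values only the excess that employers pay for degree-holders over comparable staff without the degree.

```python
def training_value(n_graduates, salary_with_degree, salary_without_degree):
    """Value research training as the aggregate wage premium:
    what society pays for degree-holders *over* comparable staff
    without the degree, rather than the full salary bill."""
    premium = salary_with_degree - salary_without_degree
    return n_graduates * premium

# Purely illustrative figures, not from the actual study:
full_bill = 1000 * 40_000                          # naive method: total salaries
premium_value = training_value(1000, 40_000, 32_000)  # premium method
print(full_bill, premium_value)
```

On these made-up figures the premium measure comes out at a fifth of the full salary bill – the direction of the correction, if not the size, is the point.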

*** Incidentally, this is also my major reason for supporting research capacity building in the south – I think it is unrealistic to expect that building research capacity is going to yield returns via creation of new knowledge/technology – at least in the short term. But I do think that society benefits from having highly trained scientific thinkers who are able to adapt and use research knowledge and have influence on policy either by serving as policy makers themselves or by exerting evidence-informed influence.



Race for impact in the age of austerity

I have recently been pondering what the age of austerity means for the development community. One consequence which seems inevitable is increasing scrutiny of how development funds are spent. The principle behind this is hard to argue with; money is limited and it seems both sensible and ethical to make sure that we do as much good as possible with what we have. However, the way in which costs and benefits are assessed could have a big impact on the future development landscape. Already, some organisations are taking the value for money principle to its logical conclusion and trying to assess and rank causes in terms of their ‘bang for your buck’. The Open Philanthropy project has been comparing interventions as diverse as cash transfers, lobbying for criminal justice reform and pandemic prevention, and trying to assess which offers the best investment for philanthropists (fascinating article on this here).

The Copenhagen Consensus project* is trying to do a similar thing for the sustainable development goals; using a mixture of cost-benefit analysis and expert opinion, they are attempting to quantify how much social, economic and environmental return development agencies can get by focussing on different goals. For example, they find that investing a dollar in universal access to contraception will result in an average of $120 of benefit. By contrast, they estimate that investing a dollar in vaccinating against cervical cancer will produce only $3 average return. Looking over the list of interventions and the corresponding estimated returns on investment is fascinating and slightly shocking. A number of high profile development priorities appear to give very low returns while some of the biggest returns correspond to interventions such as trade liberalisation and increased migration which are typically seen as outside the remit of development agencies (good discussion on the ‘beyond-aid agenda’ to be found from Owen Barder et al. at CGD e.g. here).
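For anyone who likes the arithmetic spelled out: once you have an estimated benefit per dollar for each intervention, the ranking itself is trivial – score and sort. A toy sketch (using only the two figures quoted above; the hard, contestable work in the real analyses is producing those numbers in the first place):

```python
def rank_by_benefit_cost(estimates):
    """Return (intervention, benefit-per-dollar) pairs, highest return first."""
    return sorted(estimates.items(), key=lambda kv: kv[1], reverse=True)

# The two benefit-per-dollar estimates quoted in the post:
estimates = {
    "universal access to contraception": 120.0,  # ~$120 benefit per $1
    "cervical cancer vaccination": 3.0,          # ~$3 benefit per $1
}

for name, bcr in rank_by_benefit_cost(estimates):
    print(f"{name}: ${bcr:.0f} of benefit per $1 invested")
```

The sort is the easy bit; everything interesting (and everything worth quibbling with) sits inside the estimates themselves.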

In general, I find the approach of these organisations both brave and important. Of course there needs to be a lot of discussion and scrutiny of the methods before these figures are used to inform policy – for example, I had a brief look at the CC analysis of higher education and found a number of things to quibble with, and I am sure that others would find the same if they examined the analysis of their area of expertise. But the fact that the analysis is difficult does not mean one should not attempt it. I don’t think it is good enough that we continue to invest in interventions just because they are the pet causes of development workers. We owe it both to the tax payers who fund development work and to those living in poverty to do our best to ensure funds are used wisely.


Achieving measurable impacts without doing anything to address root causes

Having said all that, my one note of caution is that there is a danger that these utilitarian approaches inadvertently skew priorities towards what is measurable at the expense of what is most important. Impacts which are most easily measured are often those achieved by solving immediate problems (excellent and nuanced discussion of this from Chris Blattman here). To subvert a well-known saying, it is relatively easy to measure the impact of giving a man a fish, more difficult to measure the impact of teaching a man to fish** and almost impossible to measure, let alone predict in advance, the impact of supporting the local ministry of agriculture to develop its internal capacity to devise and implement policies to support long-term sustainable fishing practices. Analysts in both the Copenhagen Consensus and the Open Philanthropy projects have clearly thought long and hard about this tension and seem to be making good strides towards grappling with it. However, I do worry that the trend within understaffed and highly scrutinised development agencies may be less nuanced.

So what is the solution? Well, firstly, development agencies need to balance easy-to-measure but low-impact interventions with tricky-to-measure but potentially high-impact ones. BUT this does not mean that we should give carte blanche to those working on tricky systemic problems to use whatever shoddy approaches they fancy; too many poor development programmes have hidden behind the excuse that it is too complicated to assess them. Just because measuring and attributing impact is difficult does not mean that we can’t do anything to systematically assess intermediate outcomes and use these to tailor interventions.

To take the example of organisational capacity building – which surely makes up a large chunk of these ‘tricky’ to measure programmes – we need to get serious about understanding what aspects of design and implementation lead to success. We need to investigate the effects of different incentives used in such projects including the thorny issue of per diems/salary supplements (seriously, why is nobody doing good research on this issue??). We need to find out what types of pedagogical approach actually work when it comes to supporting learning and then get rid of all the rubbish training that blights the sector. And we need to think seriously about the extent of local institutional buy-in required for programmes to have a chance of success – and stop naively diving into projects in the hope that the local support will come along later.

In summary, ever-increasing scrutiny of how development funds are spent is probably inevitable. However, if, rather than fearing it, we engage constructively with the discussions, we can ensure that important but tricky objectives continue to be pursued – but also that our approach to achieving them gets better.

* Edit: thanks to tribalstrategies for pointing out that Bjorn Lomborg who runs the Copenhagen Consensus has some controversial views on climate science. This underscores the need for findings from such organisations to be independently and rigorously peer reviewed.

**High five to anyone who now has an Arrested Development song on loop in their head.



Summer reading

I am currently making the most of my maternity leave by swanning around Europe in a campervan for 6 months. I have been thinking about a couple of new blogs which I will try to publish shortly – but luckily in the meantime, lots of other people are saying sensible things so I don’t have to. Here is a selection of things that have caught my eye in recent months….

I have been loving the output of Aidleap. They write about all sorts of development stuff including some great reflections on the evidence like this one on the poor state of development programme monitoring. I was also pleased to see this critique of the development sector’s obsession with innovation – and the hilarious accompanying tweet: “Donors talk about #Innovation like boys talk about sex – they’re incredibly excited about it but don’t know what they’re talking about”. It’s so true and it drives me crazy; who cares if something is innovative? We should care whether it works!


This is a picture of a mountain. It doesn’t have anything to do with this post but I think you will agree that it is rather nice.

Anyone in UK academia will be familiar with/traumatised by (delete as appropriate) REF impact case studies. I have blogged a lot in the past about the difficulties and potential dangers of assessing the impact of individual research projects and thus I loved this blog discussing why the REF case studies may not be a good reflection of policy impact of research.

This blog from the ever-sensible Michael Clemens tries to inject some objective evidence into the highly-charged discussions on migration. This guide to evaluation from the ODI RAPID gang does a great job of presenting a potentially difficult topic clearly. This blog from INASP about raising the profile of southern research is refreshingly practical.

And if you still have time for more, the Guardian’s new long-read section is fabulous and slightly addictive.



Does public (mis)understanding of science actually matter?

Many parents feel reassured by pseudoscientific treatments - although experts point out that a similar amount of reassurance could be achieved by investing the cost of treatments in wine and chocolate.

Babies suffer through a lot of bogus treatments for the sake of placebo-induced parental reassurance.

So, as regular readers know, I have recently become a mum. As I mentioned in my last post*, I was really shocked by how much pseudoscience is targeted at pregnant women. But four months after the birth, I have to tell you that it is not getting any better. What I find most concerning is just how mainstream the use of proven-not-to-work remedies is. Major supermarkets and chemists stock homeopathic teething powders; it is common to see babies wearing amber necklaces to combat teething; and I can’t seem to attend a mother and baby group without being told about the benefits of baby cranial osteopathy.

I find this preponderance of magical thinking kind of upsetting. I keep wondering why on earth we don’t teach the basics of research methodologies in high schools. But then sometimes I question whether my attitude is just yet another example of parents being judgey. I mean, other than the fact that people are wasting their money on useless treatments, does it really matter that people don’t understand research evidence? Is worrying about scientific illiteracy similar to Scottish people getting annoyed at English people who cross their hands at the beginning, rather than during the second verse, of Auld Lang Syne: i.e. technically correct but ultimately unimportant and a bit pedantic?

I guess that I have always had the hypothesis that it does matter; that if people are unable to understand the evidence behind medical interventions for annoying but self-limiting afflictions, they will also find it difficult to make evidence-informed decisions about other aspects of their lives. And crucially, they will not demand that policy makers back up their assertions about problems and potential solutions with facts.

But I have to admit that this is just my hypothesis.

So, my question to you is, what do you think? And furthermore, what are the facts? Is there any research evidence which has looked at the links between public ‘science/evidence literacy’ and decision making?? I’d be interested in your thoughts in the comments below. 


* Apologies by the way for the long stretch without posts – I’ve been kind of busy. I am happy to report though that I have been using my time to develop many new skills and can now, for example, give virtuoso performances of both ‘Twinkle Twinkle’ and ‘You cannae shove yer Granny’.†,‡

† For those of you unfamiliar with it, ‘You cannae shove yer Granny (aff a bus)’ is a popular children’s song in Scotland. No really. I think the fact that parents feel this is an important life lesson to pass on to their children tells you a lot about my country of birth…

‡ Incidentally, I notice that I have only been on maternity leave for 4 months and I have already resorted to nested footnotes in order to capture my chaotic thought processes. This does not bode well for my eventual reintegration into the world of work.



Unintended consequences: When research impact is bad for development

Development research donors are obsessed with achieving research impact and researchers themselves are feeling increasingly pressurised to prioritise communication and influence over academic quality.

To understand how we have arrived at this situation, let’s consider a little story…

Let’s imagine around 20 years ago an advisor in an (entirely hypothetical) international development agency. He is feeling rather depressed – and the reason for this is that despite the massive amount of money that they are putting into international development efforts, it still feels like a Sisyphean task. He is well aware that poverty and suffering are rife in the world and he wonders what on earth to do. Luckily this advisor is sensible and realises that what is needed is some research to understand better the contexts in which they are working and to find out what works.

Fast-forward 10 or so years and the advisor is not much happier. The problem is that lots of money has been invested in research but it seems to just remain on the shelf and isn’t making a significant impact on development. And observing this, the advisor decides that we need to get better at promoting and pushing out the research findings. Thus (more or less!) was born a veritable industry of research communication and impact. Knowledge-sharing portals were established, researchers were encouraged to get out there and meet with decision makers to ensure their findings were taken into consideration, a thousand toolkits on research communications were developed and a flurry of research activity researching ‘research communication’ was initiated.

But what might be the unintended consequences of this shift in priorities? I would like to outline three case studies which demonstrate why the push for research impact is not always good for development.

First let’s look at a few research papers seeking to answer an important question in development: does decentralisation improve provision of public services? If you were to look at this paper, or this one, or even this one, you might draw the conclusion that decentralisation is a bad thing. And if the authors of those papers had been incentivised to achieve impact, they might have gone out to policy makers and lobbied them not to consider decentralisation. However, a rigorous review of the literature which considered the body of evidence found that, on average, high quality research studies on decentralisation demonstrate that it is good for service provision. A similar situation can be found for interventions such as microfinance or Community Driven Development – lots of relatively poor quality studies saying they are good, but high quality evidence synthesis demonstrating that overall they don’t fulfil their promise.

My second example comes from a programme I was involved in a few years ago which aimed to bring researchers and policy makers together. Such schemes are very popular with donors since they appear to be a tangible way to facilitate research communication to policy makers. An evaluation of this scheme was carried out and one of the ‘impacts’ it reported on was that one policy maker had pledged to increase funding in the research institute of one of the researchers involved in the scheme. Now this may have been a good impact for the researcher in question – but I would need to be convinced that investment in that particular research institution happened to be the best way for that policy maker to contribute to development.

My final example is on a larger scale. Researchers played a big role in advocating for increased access to anti-HIV drugs, particularly in Africa. The outcome of this is that millions more people now have access to those drugs, and on the surface of it that seems to be a wholly wonderful thing. But there is an opportunity cost in investment in any health intervention – and some have argued that more benefit could be achieved for the public if funds in some countries were rebalanced towards other health problems. They argue that people are dying from cheaply preventable diseases because so much funding has been diverted to HIV. It is for this reason we have NICE in the UK to evaluate the cost-effectiveness of new treatments.

What these cases have in common is that in each I feel it would be preferable for decision makers to consider the full body of evidence rather than being influenced by one research paper, researcher or research movement. Of course I recognise that this is a highly complicated situation. I have chosen three cases to make a point but there will be many more cases where researchers have influenced policy on the basis of single research studies and achieved completely positive impacts. I can also understand that a real worry for people who have just spent years trying to encourage researchers to communicate better is that the issues I outline here could cause people to give up on all their efforts and go back to their cloistered academic existence. And in any case, even if pushing for impact were always a bad thing, publicly funded donors would still need to have some way to demonstrate to tax payers that their investments in research were having positive effects.

So in the end, my advice is something of a compromise. Most importantly, I think researchers should make sure they are answering important questions, using the methods most suitable to the question. I would also encourage them to communicate their findings in the context of the body of research. Meanwhile, I would urge donors to continue to support research synthesis – to complement their investments in primary research. And to support policy making processes which include consideration of bodies of research.



Should we be worried about policy makers’ use of evidence?

A couple of papers have come out this week on policy makers’ use of evidence.


Policy makers are apparently floating around in their own little bubbles – but should this be a cause for concern?

The first is a really interesting blog by Mark Chataway, a consultant who has spent recent months interviewing policy makers (thanks to @PrachiSrivas for sharing this with me). His conclusion after speaking to a large number of global health and development policy makers, is that most of them live in a very small bubble. They do not read widely and instead rely on information shared with them via twitter, blogs or email summaries.

The blog is a good read – and I look forward to reading the full report when it comes out – but I don’t find it particularly shocking and actually, I don’t find it particularly worrying.

No policymaker is going to be able to keep abreast of all the new research findings in his/her field of interest. Even those people who do read some of the excellent specialist sources mentioned in the article will only ever get a small sample of the new information that is being generated. In fact, trying to prospectively stay informed about all research findings of potential future relevance is an incredibly inefficient way to achieve evidence-informed decision-making. For me, a far more important question is whether decision makers access, understand and apply relevant research knowledge at the point at which an actual decision is being made.

Enter DFID’s first ever Evidence Survey – the results of which were published externally this week.

This survey (which I hear was carried out by a particularly attractive team of DFID staff) looked at a sample of staff across grades (from grade ‘B1d to SCS’ in case that means anything to you..) and across specialities.

So, should we be confident about DFID staff’s use of evidence?

Well, partly…

The good news is that DFID staff seem to value evidence really highly. In fact, as the author of the report gloats, there is even evidence that DFID values evidence more than the World Bank (although if you look closely you will see this is a bit unfair to our World Bank colleagues since the questions asked were slightly different).

And there was recognition that the process for getting new programmes approved does require staff to find and use evidence. The DFID business case requires staff to analyse the evidence base which underlies the ‘strategic need’ and the evidence which backs up different options for intervening. Guidance on how to assess evidence is provided. The business case is scrutinised by a chain of managers and eventually a government minister. Controversial or expensive (over £40m) business cases have an additional round of scrutiny from the internal Quality Assurance Unit.

Which is all great…

But one problem which is revealed by the Evidence Survey, and by recent internal reviews of DFID processes, is that there is a tendency to forget about evidence once a programme is initiated. Anyone who has worked in development knows that we work in complex and changing environments and that there is usually not clear evidence of ‘what works’. For this reason it is vital that development organisations are able to continue to gather and reflect on emerging evidence and adapt in order to optimise along the way.

A number of people on Twitter have also picked up on the fact that a large proportion of DFID staff failed some of the technical questions – on research methodologies, statistics etc. Actually, this doesn’t worry me too much since most of the staff covered by the survey will never have any need to commission research or carry out primary analysis. What I think is more important is whether staff have access to the right levels of expertise at the times when they need it. There were some hints that staff would welcome more support and training so that they were better equipped to deal with evidence.

A final area for potential improvement would be on management prioritisation of evidence. Encouragingly, most staff felt that evidence had become more of a priority over recent years – but they also tended to think that they valued evidence more than their managers did – suggesting a continued need for managers to prioritise this.

So, DFID is doing well in some areas, but clearly has some areas it could improve on. The key for me will be to ensure there are processes, incentives and capacity to incorporate evidence at all key decision points in a programme cycle. From the results of the survey, it seems that a lot of progress has been made and I for one am excited to try to get even better.



Supply and demand in evidence-informed policy – this time with pictures!

I have talked before about supply and demand in evidence-informed policy but I decided to revisit the topic with some sophisticated visual aids. I am aware that using the model of supply/demand has been criticised as over-simplifying the topic – but I still think it is a useful way to think about the connections between research evidence and policy/practice (plus, to be honest, I am fairly simple!).

You can distinguish between supply and demand by considering ‘what is the starting point?’. If you are starting with the research (whether it’s a single piece of research or a body of research on a given topic) and considering how it may achieve policy influence, you are on the supply side…

In contrast, those on the demand side, typically start with a decision (or a decision-making process) and consider how research can feed into this decision…

This distinction may seem obvious, but I think it is often missed. What this means in practice is an explosion of approaches to evidence-informed policy/practice which attempt to push more and more evidence out there in the expectation that more supply will lead to a better world…


One problem with this is that if your supply approaches focus on just one research project – or one side of a debate – they risk working against evidence-informed policy.


*Science monster usually lives here elodieunderglass.wordpress.com/ – she is just visiting my blog today


Some supply approaches do aim to increase access to a range of research and to synthesise and communicate where the weight of evidence lies. However, even these approaches are destined to fail if there is not a corresponding increase in demand…


I think we should continue to support supply-side activities but I  think we also need to get better at supporting the demand. So what would this look like in practice?

For me the two components of demand are the motivation (whether intrinsic or extrinsic) and the capacity (i.e. the knowledge, skills, attitudes, structures, systems etc) to use research. In other words, you need to want to use research and you need to be able to do so.

Motivation can be improved by enhancing the organisational culture of evidence use – but also by putting systems in place which mandate and/or reward evidence use…

Achieving this in practice needs the support of senior decision makers within a policy making institution. So for example the UK Department for International Development has transformed the incentives to use research evidence since Prof Chris Whitty came in as the Chief Scientific Advisor and Head of Research.

But incentives on their own are not enough. There also needs to be capacity and it needs to exist at multiple levels; at an organisational level, there needs to be structural capacity such as adequate internet bandwidth, access to relevant academic journals etc etc. At an individual level, those involved in the policy making process need to be ‘evidence-literate’ – i.e. they need to know what research evidence is, where they can find it, how they can appraise it, how to draw lessons from evidence for policy decisions etc etc…

Achieving this may require a new recruitment strategy – selecting people for employment who already have a good understanding of research evidence. But continuing professional development courses can also be used to ‘upskill’ existing staff.

Anyway, the above is basically a pictorial summary of this paper in the IDS bulletin so if you would like to read about the same topic in more academic terms (and without the pictures!) please do check it out. It’s not open access I’m afraid so if you want a copy please tweet me @kirstyevidence or leave a comment below.

Hope you liked the pictures!