kirstyevidence

Musings on research, international development and other stuff



Unintended consequences: When research impact is bad for development

Development research donors are obsessed with achieving research impact, and researchers themselves feel increasingly pressurised to prioritise communication and influence over academic quality.

To understand how we have arrived at this situation, let’s consider a little story…

Let’s imagine, around 20 years ago, an advisor in an (entirely hypothetical) international development agency. He is feeling rather depressed – and the reason for this is that despite the massive amount of money that they are putting into international development efforts, it still feels like a Sisyphean task. He is well aware that poverty and suffering are rife in the world and he wonders what on earth to do. Luckily this advisor is sensible and realises that what is needed is some research to better understand the contexts in which they are working and to find out what works.

Fast-forward 10 or so years and the advisor is not much happier. The problem is that lots of money has been invested in research but it seems to just remain on the shelf and isn’t making a significant impact on development. And observing this, the advisor decides that we need to get better at promoting and pushing out the research findings. Thus (more or less!) was born a veritable industry of research communication and impact. Knowledge-sharing portals were established, researchers were encouraged to get out there and meet with decision makers to ensure their findings were taken into consideration, a thousand toolkits on research communications were developed and a flurry of research activity researching ‘research communication’ was initiated.

But what might be the unintended consequences of this shift in priorities? I would like to outline three case studies which demonstrate why the push for research impact is not always good for development.

First let’s look at a few research papers seeking to answer an important question in development: does decentralisation improve provision of public services? If you were to look at this paper, or this one or even this one, you might draw the conclusion that decentralisation is a bad thing. And if the authors of those papers had been incentivised to achieve impact, they might have gone out to policy makers and lobbied them not to consider decentralisation. However, a rigorous review of the literature which considered the body of evidence found that, on average, high quality research studies on decentralisation demonstrate that it is good for service provision. A similar situation can be found for interventions such as microfinance or Community Driven Development – lots of relatively poor quality studies saying they are good, but high quality evidence synthesis demonstrating that overall they don’t fulfil their promise.

My second example comes from a programme I was involved in a few years ago which aimed to bring researchers and policy makers together. Such schemes are very popular with donors since they appear to be a tangible way to facilitate research communication to policy makers. An evaluation of this scheme was carried out, and one of the ‘impacts’ it reported was that one policy maker had pledged to increase funding to the research institute of one of the researchers involved in the scheme. Now this may have been a good impact for the researcher in question – but I am far from convinced that investment in that particular research institution was the best way for that policy maker to contribute to development.

My final example is on a larger scale. Researchers played a big role in advocating for increased access to anti-HIV drugs, particularly in Africa. The outcome is that millions more people now have access to those drugs, and on the face of it that seems a wholly wonderful thing. But there is an opportunity cost to investment in any health intervention – and some have argued that more benefit could be achieved for the public if funds in some countries were rebalanced towards other health problems. They argue that people are dying from cheaply preventable diseases because so much funding has been diverted to HIV. It is for exactly this reason that the UK has NICE to evaluate the cost-effectiveness of new treatments.

What these cases have in common is that in each I feel it would be preferable for decision makers to consider the full body of evidence rather than being influenced by one research paper, researcher or research movement. Of course I recognise that this is a highly complicated situation. I have chosen three cases to make a point, but there will be many more cases where researchers have influenced policy on the basis of single research studies and achieved completely positive impacts. I can also understand that a real worry for people who have just spent years trying to encourage researchers to communicate better is that the issues I outline here could cause people to give up on all their efforts and go back to their cloistered academic existence. And in any case, even if pushing for impact were always a bad thing, publicly funded donors would still need some way to demonstrate to taxpayers that their investments in research were having positive effects.

So in the end, my advice is something of a compromise. Most importantly, I think researchers should make sure they are answering important questions, using the methods most suitable to the question. I would also encourage them to communicate their findings in the context of the body of research. Meanwhile, I would urge donors to continue to support research synthesis – to complement their investments in primary research – and to support policy-making processes which include consideration of bodies of research.



Improving on systematic reviews

A fast-track to blogging success in the development field is to pick a research approach (RCTs, econometrics, rigorous synthesis, qualitative research etc.), ‘reveal’ that there are some drawbacks to said approach, and go on to conclude that research is bad (or at least highly suspect). Whenever I see such an article, it strikes me as a little akin to giving up on our judicial system on the basis that sometimes there are miscarriages of justice.

I mean, clearly, any method of gathering evidence to inform decisions has limitations. And of course the people making decisions will be informed by a whole lot of other factors. But rather than making me want to give up on evidence entirely, these facts inspire me to think about how we can reduce the drawbacks of research approaches and/or support people and processes so that evidence is routinely considered as one part of the decision-making process.

So, it was with happiness that I read this new paper from the ODI. The paper gives an overview of both ‘traditional’ literature reviews and systematic reviews and outlines some drawbacks of each. However, rather than taking the approach of declaring both useless, the authors go on to propose an intermediate approach which combines:

“…compliance with the broad systematic review principles (rigour, transparency, replicability) and flexibility to tailor the process towards improving the quality of the overall findings, particularly if time and budgets are constrained”.

What makes this paper particularly useful is that it sets out a clear eight-step process which potential authors can follow. The authors give plenty of detail on how each stage can be carried out and include a wealth of useful tips (for example, I learnt about the concept of ‘forward-snowballing’ – who knew?). I think many people find the idea of carrying out a rigorous review quite intimidating and will find this an invaluable guide. I also love the inclusion of a graphical representation of synthesised evidence – as I have mentioned before, I think we need to get more inventive at communicating bodies of evidence.

The authors don’t shy away from discussing the challenges of their proposed approach – with particular attention paid to the difficulties of assessing the ‘strength’ of evidence. I would tend to be slightly more positive than they are about attempting some type of assessment of evidence strength – and I am not sympathetic to the argument that authors are unable to include methods sections due to restrictive word count rules (have you seen how long some academic papers from the ODI are?!). Having said that, I do completely agree with the authors that this is the most challenging – and the most political – part of evidence synthesis and that there will always be a degree of subjectivity.

I did think the authors fell slightly into strawman territory when listing how their approach differs from SRs. A few of the differences do not really exist. For example, they mention that meta-analysis is not a useful way to synthesise data for many topics – which is true – but meta-analysis is by no means a necessary part of an SR. I would hazard a guess that most systematic reviews in development research do not use meta-analysis – see here and here for examples. They also imply that SRs do not include grey literature. This is definitely not true – any good SR should include a thorough search strategy which covers grey literature. See for example this guidance from the EPPI centre which states:

“In most approaches to systematic reviewing the aim is to produce a comprehensive and unbiased set of research relevant to the review question. Being comprehensive means that the search strategy attempts to uncover published and unpublished, easily accessible and harder to find reports of research studies.”

I do wonder whether these statements were true about some earlier SRs – for example, perhaps meta-analysis has been used inappropriately in the past, and I am sure that not all SRs (particularly when they were first introduced to the development research field) did a good job of capturing non-journal published material. This might explain the impressions reflected in the paper.

In any case, these are minor quibbles. Overall, I think it’s a good and useful paper, and I do hope that it will stimulate more people to think about how we can synthesise evidence in a way which is as objective as possible but is also practical.



The art and science of presenting synthesised evidence

In a previous post, I tried to persuade you that synthesising evidence is a good idea for development. But everyone knows that busy policy makers are unlikely to read 100-page evidence synthesis products. So what are the key messages you need to convey from a synthesis product, and how can you present them? Or in other words, how can you summarise your synthesis? Here are a few top tips…

1. Make it short… and make it pretty

There is a myth that intelligent people should be able to wade through longer documents and that aesthetics are beneath them. Codswallop, I say. People, whether intelligent or less so, love a nice picture and some attractive formatting. So if you want to get your message across, I suggest getting in touch with your inner artist. For the best quality products, you may want to teach yourself to use desktop publishing software – Scribus is open source and pretty easy to learn. Alternatively, you can create very attractive short documents using templates and styles in Word. Whichever programme you use, make sure you have a nice palette – some great ideas can be found here. And if you are using pictures, keep it legal by using Creative Commons-licensed pictures (you can search for them on Flickr or check out the marvellous Morguefile) and attributing them correctly.

2. Be explicit about your methodology

Policy makers who are serious about evidence-informed decision making are not going to believe that the evidence says something just because you say it does. They will need to be convinced that those writing the synthesis used an appropriate methodology to find, select and draw meaning from the evidence – so make sure you tell them what that was in your summary. This doesn’t mean that every evidence synthesis needs to use systematic review methodology – sometimes that will not be appropriate or practical – it just means that you need to be open about the approach(es) that were used.

3. Provide a list of – and if possible hyperlinks to – references

Evidence diagrams can help policy makers understand the ‘lay of the land’ of the evidence, but they may want to delve deeper into the evidence on a particular theme or area. So, even in a summary of synthesised evidence, DO provide references (use a numbered reference style so they don’t take up too much space in the text).

4. Provide an easy-to-understand overview of the weight of evidence

Policy makers need to get an understanding not just of how much evidence there is, but also of the quality of the research results, so that they know how much faith to put in them. When you carry out your synthesis, it will be important to use an appropriate method to appraise the quality of the evidence. See for example the GRADE method for assessing health research or this how-to note from DFID which can be used for social science research. And when you communicate your findings, it is important that you convey this as well. An effective way to do this is to use a bubble diagram which gives an easy-to-understand overview of what the evidence says and how strong the evidence is. A beautiful example illustrating the evidence for various food supplements can be found here, and a rather less serious one, blatantly stolen from my colleague @evidencebroker, can be viewed below.

There is usually a lot less evidence to synthesise in development research than in health, but similar diagrams can be generated – see for example this DFID ‘Evidence brief’ or page 43 of this paper.

[Image: example bubble diagram – an evidence graph for the blog]
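If you fancy mocking up a diagram like this yourself, below is a minimal sketch of one way to do it in Python with matplotlib. Everything in it – the themes, effect scores, quality ratings and study counts – is invented purely for illustration and is not drawn from any real synthesis.

```python
# A rough sketch of one way to produce a bubble diagram of synthesised evidence.
# The themes, effect scores, quality ratings and study counts below are invented
# purely to illustrate the layout - they are not taken from any real synthesis.
import matplotlib.pyplot as plt

themes = ["Deworming", "Microfinance", "Decentralisation", "CDD"]
effect = [0.1, 0.2, 0.7, 0.3]       # how positive the synthesised findings are (0 = none, 1 = strong)
quality = [0.9, 0.8, 0.7, 0.6]      # strength of the body of evidence (0 = weak, 1 = strong)
n_studies = [45, 30, 25, 18]        # number of included studies, mapped to bubble size

fig, ax = plt.subplots(figsize=(7, 5))
ax.scatter(effect, quality, s=[n * 20 for n in n_studies], alpha=0.5)

# Label each bubble with its theme
for x, y, label in zip(effect, quality, themes):
    ax.annotate(label, (x, y), ha="center", va="center", fontsize=9)

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_xlabel("What the evidence says (size of effect)")
ax.set_ylabel("Strength of the body of evidence")
ax.set_title("Synthesised evidence at a glance (illustrative data)")
plt.tight_layout()
plt.show()
```

The nice thing about a chart like this is that a policy maker can see in one glance where the evidence is both strong and positive, and where apparent effects rest on only a handful of weak studies.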



Evidence synthesis – what has it ever done for us?

I have talked before about the danger of using results from single research studies to push for policy change. A more balanced view of the whole body of evidence can be gained by carrying out evidence synthesis.  Systematic reviews (or other rigorous synthesis approaches) attempt to gather, appraise and summarise bodies of evidence in a transparent way. By looking at a whole body of evidence, and appraising the rigour of the studies you are looking at, you can get more certainty about what is really going on.

Systematic reviews have long been used in the medical field and have been shown to provide more accurate results than relying on clinical ‘expertise’. In (non-medical) international development topics, rigorous synthesis is much less established; there are relatively few people with expertise in synthesis and the methodologies for synthesising social science research results, and in particular qualitative data, are still being developed.

Nevertheless, synthesised evidence is starting to reveal new and important information about international development topics. Here I summarise three important roles that synthesised evidence can play in improving development interventions.

1. It can tell us that something is true which we didn’t realise was true

*OK so maybe everyone else knew about the existence of narwhals but it came as a bit of a surprise to me when I discovered them in a David Attenborough documentary last year…

Evidence synthesis can sometimes reveal something to be true which an ‘unweighted’ or non-systematic view of the literature would not have revealed. A good example is this paper about decentralisation of services in developing countries. The authors conclude the following:

“Many influential surveys have found that the empirical evidence of decentralization’s effects on service delivery is weak, incomplete and often contradictory. Our own unweighted reading of the literature concurs. But when we organize the evidence first by substantive theme, and then – crucially – by empirical quality and the credibility of its identification strategy, clear patterns emerge. Higher quality evidence indicates that decentralization increases technical efficiency across a variety of public services, from student test scores to infant mortality rates.”

In other words, only by taking the evidence together and organising it by quality were the authors able to reveal the real role that decentralisation is playing.
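To see why organising evidence this way matters, here is a toy example in Python (using pandas). The effect sizes are entirely made up for illustration and have nothing to do with the studies in the paper; the point is simply that an unweighted average can point one way while the high quality studies point another.

```python
# Toy illustration of an 'unweighted reading' vs. organising evidence by quality.
# All effect sizes below are invented for illustration only.
import pandas as pd

studies = pd.DataFrame({
    "theme":   ["service delivery"] * 6,
    "quality": ["low", "low", "low", "low", "high", "high"],
    "effect":  [-0.3, -0.4, -0.2, -0.3, 0.4, 0.5],
})

# Unweighted reading: average across all studies regardless of quality.
print("Unweighted mean effect:", round(studies["effect"].mean(), 2))    # -0.05

# Organise by theme and then quality: the high quality studies tell a different story.
print(studies.groupby(["theme", "quality"])["effect"].mean().round(2))  # high: 0.45, low: -0.30
```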


Back in the 80s, we thought we knew it all…

2. It can tell us that something that we all thought was true is actually not true

A classic example of this was this systematic review published last year which showed that, contrary to popular belief in the development community, routine deworming has little impact on school attendance or school performance. Unsurprisingly, this finding was pretty controversial – development ‘experts’ had been waxing lyrical about deworming as a means of educational improvement for years. However, by looking at the evidence together and, crucially, looking at the quality of the evidence, the authors revealed a different story: much as we liked the idea of being able to improve educational outcomes with an inexpensive pill, the evidence revealed that it’s just not that simple.

This is me in high school – looking smug and not suspecting that quantum physics would make my head explode.

3. It can tell us that something we thought we fully understood, we actually don’t have a clue about

The Justice and Security Research Consortium recently carried out a synthesis of the evidence on the media and conflict (it is not quite published yet but a summary can be found here). They found a lot of papers which make big claims about the media’s role either in promoting or in preventing conflict. The large body of literature making these claims could easily fool a busy policy maker into assuming that the links between the media and conflict were well established. However, when the evidence was assessed for rigour, it was found that many of these papers were based only on opinion or theory and that the number of high quality research papers in this area was low. The authors concluded that, at present, it is not possible to confirm or refute the claims about the media’s role in conflict based on the available evidence. Now some might say that this is the problem with synthesis – it often just tells us that we don’t know anything much! But in fact for a policy maker it is very important to know whether an intervention is tried and tested in multiple contexts or whether it is an innovative strategy which may have impact but which it would be sensible to monitor closely.

So, synthesised evidence – it might not sound exciting, but it is revealing lots of important new things. To find out more about what synthesised evidence can tell us, check out this database of international development systematic reviews. And watch out for a follow-up post on how synthesised evidence can be communicated effectively.