kirstyevidence

Musings on research, international development and other stuff



Does public (mis)understanding of science actually matter?

Many parents feel reassured by pseudoscientific treatments - although experts point out that a similar amount of reassurance could be achieved by investing the cost of treatments in wine and chocolate.

Babies suffer through a lot of bogus treatments for the sake of placebo-induced parental reassurance.

So, as regular readers know, I have recently become a mum. As I mentioned in my last post*, I was really shocked by how much pseudoscience is targeted at pregnant women. But four months after the birth, I have to tell you that it is not getting any better. What I find most concerning is just how mainstream the use of proven-not-to-work remedies is. Major supermarkets and chemists stock homeopathic teething powders; it is common to see babies wearing amber necklaces to combat teething; and I can't seem to attend a mother and baby group without being told about the benefits of baby cranial osteopathy.

I find this preponderance of magical thinking kind of upsetting. I keep wondering why on earth we don’t teach the basics of research methodologies in high schools. But then sometimes I question whether my attitude is just yet another example of parents being judgey. I mean, other than the fact that people are wasting their money on useless treatments, does it really matter that people don’t understand research evidence? Is worrying about scientific illiteracy similar to Scottish people getting annoyed at English people who cross their hands at the beginning, rather than during the second verse, of Auld Lang Syne: i.e. technically correct but ultimately unimportant and a bit pedantic?

I guess that I have always had the hypothesis that it does matter; that if people are unable to understand the evidence behind medical interventions for annoying but self-limiting afflictions, they will also find it difficult to make evidence-informed decisions about other aspects of their lives. And crucially, they will not demand that policy makers back up their assertions about problems and potential solutions with facts.

But I have to admit that this is just my hypothesis.

So, my question to you is: what do you think? And furthermore, what are the facts? Is there any research evidence which has looked at the links between public 'science/evidence literacy' and decision making? I'd be interested in your thoughts in the comments below.

.

* Apologies by the way for the long stretch without posts – I’ve been kind of busy. I am happy to report though that I have been using my time to develop many new skills and can now, for example, give virtuoso performances of both ‘Twinkle Twinkle’ and ‘You cannae shove yer Granny’.†,‡

† For those of you unfamiliar with it, ‘You cannae shove yer Granny (aff a bus)’ is a popular children’s song in Scotland. No really. I think the fact that parents feel this is an important life lesson to pass on to their children tells you a lot about my country of birth…

‡ Incidentally, I notice that I have only been on maternity leave for 4 months and I have already resorted to nested footnotes in order to capture my chaotic thought processes. This does not bode well for my eventual reintegration into the world of work.



The politics of evidence supply and demand

I have written before about the separate functions of evidence supply and demand. To recap, supply concerns the production and communication of research findings while demand concerns the uptake and usage of evidence. While this model can be a useful way to think about the process of evidence-informed policy making, it has been criticised for being too high level and not really explaining what evidence supply and demand looks like in the real world – and in particular in developing countries.

I was therefore really pleased to see this paper from the CLEAR centre at the University of the Witwatersrand, which examines in some detail what supply and demand for evidence – in this case specifically evaluation evidence – look like in five African countries.

What is particularly innovative about this study is that they compare the results of their assessments of evaluation supply and demand with a political economy analysis and come up with some thought-provoking ideas about how to promote the evidence agenda in different contexts. In particular, they divide their five case study countries into two broad categories and suggest some generalisable rules for how evidence fits into each.

Developmental patrimonial: the ‘benevolent dictator’

Two of the countries – Ethiopia and Rwanda – they categorise as broadly developmental patrimonial. In these countries, there is strong centralised leadership with little scope for external actors to exert influence. Perhaps surprisingly, in these countries there is relatively high endogenous demand for evidence; the central governments have a strong incentive to achieve developmental outcomes in order to maintain their legitimacy and therefore, at least in some cases, look for evaluation evidence to inform what they do. These countries also have relatively strong technocratic ministries which may be more able to deal with evidence than those in some other countries. It is important to point out that these countries are not consistently and systematically using research evidence to inform decisions and that in general they are more comfortable with impact evaluation evidence which has clear pre-determined goals rather than evidence which questions values. But there does seem to be some existing demand and perhaps the potential for more in the future. When it comes to supply of evaluations, the picture is less positive: although there are examples of good supply, in general there is a lack of expertise in evaluations, and most evaluations are led by northern experts.

Neopatrimonial: a struggle for power and influence

The other three countries – Malawi, Zambia and Ghana – are categorised as broadly neopatrimonial. These countries are characterised by patronage-based decision making. There are multiple interest groups which are competing for influence and power, largely via informal processes. Government ministries are weaker and stated policy may bear little relationship to what actually happens. Furthermore, line ministries are less influenced by the Treasury and thus incentives for evidence from the Treasury are less likely to have an effect. However, the existence of multiple influential groups does mean that there are more diverse potential entry points for evidence to feed into policy discussions. Despite these major differences in demand for evidence, evaluation supply in these countries was remarkably similar to that in the developmental patrimonial countries – i.e. some examples of good supply but in general relatively low capacity and reliance on external experts.

I have attempted to summarise the differences between these two categories of countries – as well as the commonalities – in the table below.

[Table: summary of the differences and commonalities in evaluation supply and demand between the two categories of countries]

There are a couple of key conclusions which I drew from this paper. Firstly, if we are interested in supporting the demand for evidence in a given country, it is vital to understand the political situation to identify entry points where there is potential to make some progress on use of evidence. The second point is that capacity to carry out evaluations remains very low despite a large number of evaluation capacity building initiatives. It will be important to understand whether existing initiatives are heading in the right direction and will produce stronger capacity to carry out evaluations in due course – or whether there is a need to rethink the approach.



Ebola-related rant

©EC/ECHO/Jean-Louis Mosser

Warning: I will be making use of my blog for a small rant today. Normal service will resume shortly.

Like many others, I am getting very cross about coverage of Ebola. The first target for my ire is the articles (I won't link to any of them because I don't want to drive traffic there) that I keep seeing popping up on Facebook and Twitter suggesting that Ebola is not real and is in fact a western conspiracy designed to justify the roll-out of a vaccine which will kill off Africans. This kind of article is of course ignorant – but it is also highly insulting and dangerous. It is insulting to the thousands of health-care workers who are, as you are reading this, putting their lives on the line to care for Ebola patients in abysmal conditions. Those who are working in the riskiest conditions are health workers from the region. But it is worth noting that hundreds of people from outside Africa – including many government workers – are also volunteering to help, and to suggest that their governments are actually the ones plotting the outbreak is particularly insulting. But even worse is the potential danger of these articles. They risk influencing those who have funds which could be invested in the response – and they also risk influencing those in affected countries not to take up a vaccine if one were developed.

This type of conspiracy theory is of course nothing new – the belief that HIV is 'not real' and/or was invented by the west to kill off Africans is widely held across the continent. I have worked with many well-educated African policy makers who have subscribed to that belief. And it is a belief which has killed hundreds of thousands of people. The most famous example is of course in Thabo Mbeki's South Africa, where an estimated 300,000 people died of AIDS due to his erroneous beliefs. But I am sure the number would be much higher if you were to consider other policy makers and religious leaders who have propagated these types of rumours and advised against taking effective anti-retroviral treatments.

The second thing that is really upsetting me is the implicit racism of some western coverage of the outbreak. I find it deeply depressing that, if you were to take this media coverage as indicative of the interests of people in the US and Europe, you would conclude that they only take Ebola seriously when it starts to affect people in their own countries. It's as if we are incapable of acknowledging our shared humanity with the people of Sierra Leone, Guinea and Liberia. Those are people just like us. People who have hopes and ambitions. People who love their children and get irritated by their mothers-in-law. People who crave happiness. People who are terrified of the prospect of dying an excruciating and undignified death. Why is the immense suffering of these people not enough to get our attention and sympathy? How could we be so selfish as to get panicked by the incredibly unlikely prospect of the virus spreading in our countries when it already is spreading and causing misery to our fellow human beings?

I mean, if the media of America and Europe wanted to be evidence-informed about their selfishness, they would be spending their time worrying about things far more likely to kill us – cancer, obesity and even the flu. Or they could extend their empathy at least to their children's generation and spend time worrying about global warming.

But even better, they could ponder the fact that the best-case scenario for the west's response to the crisis is likely to be that thousands of people continue to die excruciating and undignified deaths, but at least do so in ways less likely to infect others around them.

That is a pretty depressing prospect.

.

Edit: thanks to @davidsteven for pointing out that my original post was doing a disservice to the people of Europe and America by implying they were all uninterested in the plight of people in Africa. That is of course not true and many (most?) people are very concerned about what is happening. I have tried to edit above to clarify that it is the panic-stirring by the media that I am really moaning about.



Scottish independence and the fallacy of evidence-BASED policy

As I may have mentioned before, I am a proud Scot. I have therefore been following with interest the debates leading up to the Scottish referendum on independence which will take place on the 18th September (for BBC coverage see here or, more entertainingly, watch this fabulous independence megamix). Since I live in England, I don't get to vote – and even if I did, as a serving civil servant it would not be appropriate for me to discuss my view here. But I do think the independence debate highlights some important messages about evidence and policy making – namely the fact that policy cannot be made BASED on evidence alone.

The main reason for this is that before you make a policy decision you need to decide what policy outcome you wish to achieve – and this decision will be influenced by a whole range of factors including your beliefs, your political views, your upbringing etc. etc. So in the case of the independence debate, as eloquently pointed out by @cairneypaul in this blog, the people of Scotland need to decide what their priorities for the future of Scotland will be. Some will feel that financial stability is the priority, others will focus on the future of the Trident nuclear deterrent, some will focus on their desire for policy decisions to be made locally, while others will care most about preservation of a historic union.

Only once people are aware of what their priorities are will evidence really come into play. In an ideal world there would then be a perfect evidence base which would provide an answer on which option (yes or no) would be most likely to lead to the desired policy outcome(s). But of course we all know that we don't live in an ideal world, and so in the independence debate – as in most policy decisions – the evidence is contradictory, incomplete and contested. And therefore a second reason why a decision cannot be fully 'evidence-based' is that voters will need to assess the evidence, and a certain degree of subjectivity will inevitably come into this appraisal.

It is for the above reasons that I strongly prefer the term ‘evidence-informed’ to the term ‘evidence-based’*. Evidence-informed decision making IS possible – it involves decision makers consulting and appraising a range of evidence sources and using the information to inform their decision. As such, two policy makers may make completely different policy decisions which have both been fully informed by the evidence. Likewise, my decision to happily eat a large slice of chocolate cake instead of going to the gym can be completely evidence-informed since I get to choose which outcomes I am seeking :-).

A final point is that since evidence can inform policies designed to lead to diverse outcomes, evidence-informed policy making is not inevitably a ‘good thing’; if a policy maker has nefarious aims, she can use evidence to help her achieve these in the same way that a more altruistic policy maker can use evidence to benefit others. Thus efforts to support evidence-informed policy will only be beneficial when those making decisions are actually motivated to improve the lives of others.

.

*n.b. I am a big supporter of the 'evidence-based policy in development' network since I suspect the name choice is mainly historical rather than a statement of policy. In fact, judging by discussions via the listserv, I would suspect that most members prefer the term evidence-informed policy.

 



Unintended consequences: When research impact is bad for development

Development research donors are obsessed with achieving research impact and researchers themselves are feeling increasingly pressurised to prioritise communication and influence over academic quality.

To understand how we have arrived at this situation, let’s consider a little story…

Let’s imagine around 20 years ago an advisor in an (entirely hypothetical) international development agency. He is feeling rather depressed – and the reason for this is that despite the massive amount of money that they are putting into international development efforts, it still feels like a Sisyphean task. He is well aware that poverty and suffering are rife in the world and he wonders what on earth to do. Luckily this advisor is sensible and realises that what is needed is some research to understand better the contexts in which they are working and to find out what works.

Fast-forward 10 or so years and the advisor is not much happier. The problem is that lots of money has been invested in research but it seems to just remain on the shelf and isn’t making a significant impact on development. And observing this, the advisor decides that we need to get better at promoting and pushing out the research findings. Thus (more or less!) was born a veritable industry of research communication and impact. Knowledge-sharing portals were established, researchers were encouraged to get out there and meet with decision makers to ensure their findings were taken into consideration, a thousand toolkits on research communications were developed and a flurry of research activity researching ‘research communication’ was initiated.

But what might be the unintended consequences of this shift in priorities? I would like to outline three case studies which demonstrate why the push for research impact is not always good for development.

First let's look at a few research papers seeking to answer an important question in development: does decentralisation improve provision of public services? If you were to look at this paper, or this one, or even this one, you might draw the conclusion that decentralisation is a bad thing. And if the authors of those papers had been incentivised to achieve impact, they might have gone out to policy makers and lobbied them not to consider decentralisation. However, a rigorous review of the literature which considered the body of evidence found that, on average, high quality research studies on decentralisation demonstrate that it is good for service provision. A similar situation can be found for interventions such as microfinance or Community Driven Development – lots of relatively poor quality studies saying they are good, but high quality evidence synthesis demonstrating that overall they don't fulfil their promise.
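(As an aside, a toy sketch may make the 'body of evidence' point concrete. The Python snippet below uses entirely made-up effect sizes and a simple fixed-effect, inverse-variance pooling – it is not based on the decentralisation studies linked above – but it shows how a synthesis that weights studies by their precision can reach the opposite conclusion from the handful of small, noisy studies that an impact-hungry researcher might choose to promote.)

```python
# Toy illustration with hypothetical numbers: why a synthesis of the whole body
# of evidence can point the opposite way from a few small, noisy studies.
# Each study is (effect_estimate, standard_error); positive = intervention helps.

studies = [
    (-0.30, 0.25),  # small, imprecise study suggesting harm
    (-0.20, 0.30),  # small, imprecise study suggesting harm
    (-0.10, 0.28),  # small, imprecise study suggesting harm
    (0.15, 0.05),   # large, rigorous study suggesting benefit
    (0.12, 0.06),   # large, rigorous study suggesting benefit
]

# Fixed-effect (inverse-variance) pooling: precise studies count for more.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
# Despite three 'negative' studies, the pooled estimate is positive, because
# the weight sits with the precise studies. Reading any single paper in
# isolation could easily lead a policy maker to the wrong conclusion.
```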

My second example comes from a programme I was involved in a few years ago which aimed to bring researchers and policy makers together. Such schemes are very popular with donors since they appear to be a tangible way to facilitate research communication to policy makers. An evaluation of this scheme was carried out and one of the ‘impacts’ it reported on was that one policy maker had pledged to increase funding in the research institute of one of the researchers involved in the scheme. Now this may have been a good impact for the researcher in question – but I would need to be convinced that investment in that particular research institution happened to be the best way for that policy maker to contribute to development.

My final example is on a larger scale. Researchers played a big role in advocating for increased access to anti-HIV drugs, particularly in Africa. The outcome of this is that millions more people now have access to those drugs, and on the surface of it that seems to be a wholly wonderful thing. But there is an opportunity cost in investment in any health intervention – and some have argued that more benefit could be achieved for the public if funds in some countries were rebalanced towards other health problems. They argue that people are dying from cheaply preventable diseases because so much funding has been diverted to HIV. It is for this reason we have NICE in the UK to evaluate the cost-effectiveness of new treatments.

What these cases have in common is that in each I feel it would be preferable for decision makers to consider the full body of evidence rather than being influenced by one research paper, researcher or research movement. Of course I recognise that this is a highly complicated situation. I have chosen three cases to make a point, but there will be many more cases where researchers have influenced policy on the basis of single research studies and achieved completely positive impacts. I can also understand that a real worry for people who have just spent years trying to encourage researchers to communicate better is that the issues I outline here could cause people to give up on all their efforts and go back to their cloistered academic existence. And in any case, even if pushing for impact were always a bad thing, publicly funded donors would still need some way to demonstrate to tax payers that their investments in research were having positive effects.

So in the end, my advice is something of a compromise. Most importantly, I think researchers should make sure they are answering important questions, using the methods most suitable to the question. I would also encourage them to communicate their findings in the context of the body of research. Meanwhile, I would urge donors to continue to support research synthesis – to complement their investments in primary research – and to support policy making processes which include consideration of bodies of research.



Make love not war: bringing research rigour and context together

I’ve just spent a few days in Indonesia having meetings with some fascinating people discussing the role of think tanks in supporting evidence-informed policy. It was quite a privilege to spend time with people who had such deep and nuanced understanding of the ‘knowledge sectors’ in different parts of the world (and if you are interested in learning more, I would strongly recommend you check out some of their blogs here, here and here).

However, one point of particular interest within the formal meetings was that research quality/rigour often seemed to be framed in opposition to considerations of relevance and context. I was therefore interested to see that Lant Pritchett has also just written a blog with essentially the same theme – making the point that research rigour is less important than contextual relevance.

I found this surprising – not because I think context is unimportant – but because I do not see why the argument needs to be dichotomous. Research quality and research relevance are two important issues and the fact that some research is not contextually relevant does not in any way negate the fact that some research is not good quality.

How not to move a discussion forward

To illustrate this, let’s consider a matrix comparing quality with relevance.

|  | Low Quality | High Quality |
|---|---|---|
| Low contextual understanding | The stuff which I think we can all agree is pointless | Rigorous research which is actually looking at irrelevant/inappropriate questions due to poor understanding of context |
| High contextual understanding | Research which is based on deep understanding of context but which is prone to bias due to poor methodology | The good stuff! Research which is informed by good contextual understanding and which uses high quality methods to investigate relevant questions |

Let me give some examples from each of these categories:

Low quality, low contextual understanding

I am loath to give any examples for this box since it will just offend people – but I would include in this category any research which involves a researcher with little or no understanding of the context ‘parachuting in’ and then passing off their opinions as credible research.

High quality, low contextual understanding

An example of this is here – a research study on microbicides to prevent the transmission of HIV which was carried out in Zambia. This research used an experimental methodology – the most rigorous approach one can use when seeking to prove causal linkages. However, the qualitative research strand which was run alongside the trial demonstrated that, due to poor understanding of sexual behaviours in the context they were working in, the experimental data were flawed.

Low quality, high contextual understanding

An example of this is research to understand the links between investment in research and the quality of university education which relies on interviews and case studies with academics. These academics have a very high understanding of the context of the university sector and you can therefore see why people would choose to ask them this question. However, repeated studies show that academics almost universally believe that investment in research is crucial to drive up the quality of education within universities, while repeated rigorous empirical studies reveal that the relationship between research and education quality is actually zero.

High quality, high contextual understanding

An example here could be this set of four studies of African policy debates. The author spent extended periods of time in each location and made every effort to understand the context – but she also used high quality qualitative research methods to gather her data. Another example could be the CDD paper I have blogged about before where an in-depth qualitative approach to understand context was combined with a synthesis of high-quality experimental research evidence. Or the research described in this case study – an evaluation carried out in Bolivia which demonstrates how deep contextual understanding and research rigour can be combined to achieve impact.

Some organisations will be really strong on relevance but be producing material which is weak methodologically and therefore prone to bias. This is dangerous since – as described above – poor quality research may well give answers – but they may be entirely the wrong answers to the questions posed. Other organisations will be producing stuff which is highly rigorous but completely irrelevant. Again, this is at best pointless and at worst dangerous if decision makers do not recognise that it is irrelevant to the questions they are grappling with.

In fact, the funny thing is that when deciding whether to concentrate more on improving research relevance or research quality… context matters! The problem of poor quality and the problem of low contextual relevance both occur and both reduce the usefulness of the research produced – and arguing about which one is on average more damaging is not going to help improve that situation.

One final point that struck me from reading the Pritchett blog is that he appears to have a fear that a piece of evidence which shows that something works in one context will be mindlessly used to make the argument that the same intervention should be used in another. In other words, there is a concern that rigorous evidence will be used to back up normative policy advice. If evidence were to be used in that way, I would also be afraid of it – but that is fundamentally not what I consider to be evidence-informed policy making. In fact, I disagree that any research evidence ever tells anyone what they should do. Thus, I agree with Pritchett that evidence of the positive impact of low class sizes in Israel does not provide the argument that class sizes should be lowered in Kenya. But I would also suggest that such evidence does not necessarily mean that policy makers in Israel should lower class sizes. This evidence provides some information which policy makers in either context may wish to consider – hence evidence-informed policy making. The Israeli politicians may come to the conclusion that the evidence of the benefit of low class sizes is relatively strong in their context. However, they may well make a decision not to lower class sizes due to other factors – for example finances. I would still consider this decision to be evidence-informed. Conversely, the policy makers in Kenya may look at the Israeli evidence and conclude that it refers to a different context and that it may therefore not provide a useful prediction of what will happen in Kenya – however, they may decide that it is sufficient to demonstrate that in some contexts lower class sizes can improve outcomes and that that is sufficient evidence for them to take a decision to try the policy out.

In other words, political decisions are always based on multiple factors – evidence will only ever be one of them. And evidence from alternative contexts can still provide useful information – providing you don’t overinterpret that information and assume that something that works in one context will automatically transfer to another.



Guest post on Pritchett Sandefur paper

Readers, I am delighted to introduce my first ever guest post. It is from my colleague Max – who can be found lurking on twitter as @maximegasteen – and it concerns the recent Pritchett/Sandefur paper. Enjoy! And do let us know your thoughts on the paper in the comments.

Take That Randomistas: You're Totally Oversimplifying Things… (so $f(x)=a_0+\sum_{n=1}^{\infty}\left(a_n \cos\frac{n\pi x}{L}+b_n \sin\frac{n\pi x}{L}\right)$…)


The quest for internal validity can sometimes go too far…
(Find more fab evaluation cartoons on freshspectrum.com)

Development folk are always talking about "what works". It usually comes up in a research proposal which starts by saying "there are no silver bullets in this complex area" and then, a few paragraphs later, ends with a strong call that "we need to know what works". It's an attractive and intuitive rhetorical device. I mean, who could be against finding out 'what works'? Surely no-one* wants to invest in something that doesn't work?

Of course, like all rhetorical devices, "what works" is an over-simplification. But a new paper by Lant Pritchett and Justin Sandefur, Context Matters for Size, argues that this rhetorical device is not just simplistic, but actually dangerous for sensible policy making in development. The crux of the argument is that the primacy of methods for neat attribution of impact in development research, and donors' giddy-eyed enthusiasm when an RCT is dangled in front of their eyes, lead to some potentially bad decisions.

Pritchett and Sandefur highlight cases where, on the basis of some very rigorous but limited evidence, influential researchers have pushed hard for the global scale-up of 'proven' interventions. The problem with this is that while RCTs can have very strong internal validity (i.e. they are good at demonstrating that a given factor leads to a given outcome), their external validity (i.e. the extent to which their findings can be generalised) is oftentimes open to question. Extrapolating from one very different context, often at small scale, to another context can be very misleading. They go on to use several examples from education to show that estimates using less rigorous methods, but from the local context, are a better guide to the true impact of an intervention than a rigorous study from a different context.
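(A minimal, hypothetical simulation may help illustrate the internal/external validity distinction. The numbers and contexts below are invented for illustration and are not drawn from the paper: the point is simply that an RCT can recover its own context's true effect very precisely while remaining a poor guide to a context where the true effect is different.)

```python
import random

random.seed(1)

def run_rct(true_effect, n, noise_sd=1.0):
    """Simulate a simple two-arm trial and return the estimated treatment effect."""
    treated = [true_effect + random.gauss(0, noise_sd) for _ in range(n)]
    control = [random.gauss(0, noise_sd) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

# Hypothetical 'true' effects of the same intervention in two contexts.
TRUE_EFFECT_A = 0.40   # context where the constraint the intervention relaxes is binding
TRUE_EFFECT_B = 0.05   # context where some other constraint binds

estimate_in_A = run_rct(TRUE_EFFECT_A, n=5000)

print(f"RCT estimate in context A: {estimate_in_A:.2f} (true effect {TRUE_EFFECT_A})")
print(f"True effect in context B:  {TRUE_EFFECT_B}")
# The estimate from A is internally valid (it closely recovers A's true effect),
# but using it to predict what will happen in B would overstate the impact
# roughly eight-fold - the external validity problem the paper is pointing at.
```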

All in all, a sensible argument. But that is kind of what bothers me. I feel like Pritchett and Sandefur have committed the opposite rhetorical sin to the “what works” brigade – making something more complicated than it needs to be. Sure, it’s helpful to counterbalance some of the (rather successful) self-promotion of the more hard-line randomistas’ favourite experiments, but I think this article swings too far in the opposite direction.

I think Pritchett and Sandefur do a slight disservice to people who support evidence-informed development (full disclosure: I am one of them) by assuming they would blindly apply the results of a beautiful study from across the world in the context in which they work. At the same time (and here I will enter 'doing a disservice to the people working in development' territory) I would love to be fighting my colleagues on the frontline who are trying to ignore good quality evidence from the local context in favour of excellent quality evidence from elsewhere. But in my experience I've faced the opposite challenge, where people designing programmes are putting more emphasis on dreadful local evidence to make incredible claims about the potential effectiveness of their programme ("we asked 25 people after the project if they thought things were better and 77.56% said it had improved by 82.3%" – the consultants masquerading as researchers who wrote this know who they are).

My bottom line on the paper? It’s a good read from some of the best thinkers on development. But it’s a bit like watching a series of The Killing – lots of detail, a healthy dose of false leads/strawmen but afterwards you’re left feeling a little bit bewildered – did I have to go through all that to find out not to trust the creepy guy who works at the removal company/MIT?

Having said that, it’s useful to always be reminded that the important question isn’t “does it work (somewhere)” but “did it work over there and would it work over here”.  I’d love to claim credit for this phrase, but sadly someone wrote a whole (very good) book about it.


*With the possible exception of Lyle Lanley, who convinced everyone with a fancy song and dance routine to build a useless monorail in The Simpsons.