kirstyevidence

Musings on research, international development and other stuff



Holding decision makers to account for evidence use

Evidence-informed policy – it’s a wonderful thing. But just how widespread is it? The ‘Show your workings’ report from the Institute for Government (and collaborators Sense About Science and the Alliance for Useful Evidence) has asked this question and concluded… not very. It states “there [are] few obvious political penalties for failing to base decision[s] on the best available evidence”. I have to say that as a civil servant this rings true. It’s not that people don’t use evidence – actually most civil servants, at least where I work, do. But there are no good systems in place to distinguish between people who have systematically looked at the full body of evidence and appraised its strengths and weaknesses – and those who have referenced a few cherry-picked studies to back up their argument.

[Image: cat up a tree]

Rosie is my actual cat’s name. And she does indeed make many poor life decisions. Incidentally, I named my other cat ‘Mouse’ and now that I am trying to teach my child to identify animals I am wondering just how wise a life decision that was…

The problem for those scrutinising decision making – parliament, audit bodies and, in the case of development, the Independent Commission for Aid Impact – is that if you are not a topic expert it can be quite hard to judge whether the picture of evidence presented in a policy document really represents an impartial assessment of the state of knowledge. The IfG authors realised this was a problem quite early in their quest – and came up with a rather nifty solution. Instead of trying to decide if decisions are based on an unbiased assessment of evidence, they simply looked at how transparent decision makers had been about how they had appraised the evidence.

Now, on the evidence supply side there has been some great work to drive up transparency. In the medical field, Ben Goldacre is going after pharmaceutical companies all guns blazing to get them to clean up their act. In international development, registers of evaluations are appearing and healthy debates are emerging on the nature of pre-analysis plans. This is vitally important – if evaluators don’t declare what they are investigating and how, it is far too easy for them to quietly not publish inconvenient findings – or to try multiple types of analysis until, by chance, one gives them a more agreeable answer.

But as the report shows, and as others have argued elsewhere, there has been relatively little focus on transparency on the ‘demand’ side. And by overlooking this, I think that we might have been missing a trick. You see, it turns out that the extent to which a policy document explicitly sets out how evidence has been gathered and appraised is a rather good proxy for systematic evidence appraisal. And the IfG’s hypothesis is that if you could hold decision makers to account for their evidence transparency, you could go some way towards improving the systematic use of evidence to inform decisions.

The report sets out a framework which can be used to assess evidence transparency. As usual, I have a couple of tweaks I would love to see. I think it would be great if the framework included a more explicit assessment of the search strategy used to gather the initial body of evidence – and perhaps rewarded people for making use of existing rigorous synthesis products such as systematic reviews. But in general, I think it is a great tool and I really hope the IfG et al. are successful in persuading government departments – and crucially those who scrutinise them – to make use of it.
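
To make the idea concrete, here is a toy sketch in Python – not the IfG’s actual framework, just a hypothetical checklist with made-up items and equal weights – of how evidence transparency might be encoded and scored for a policy document, including the two tweaks suggested above.

```python
# Purely hypothetical sketch - NOT the IfG framework, just a toy illustration
# of how a transparency checklist might be encoded and scored.
from dataclasses import dataclass, fields

@dataclass
class TransparencyCheck:
    states_evidence_sources: bool       # does the document say what evidence it drew on?
    describes_appraisal_method: bool    # does it explain how that evidence was weighed?
    discusses_gaps_and_uncertainty: bool
    describes_search_strategy: bool     # suggested addition: how was the evidence gathered?
    uses_existing_syntheses: bool       # suggested addition: systematic reviews etc.

def transparency_score(check: TransparencyCheck) -> float:
    """Fraction of checklist items met (equal weights, for illustration only)."""
    items = [getattr(check, f.name) for f in fields(check)]
    return sum(items) / len(items)

example = TransparencyCheck(True, True, False, False, True)
print(f"Transparency score: {transparency_score(example):.0%}")  # 60%
```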

 




Does public (mis)understanding of science actually matter?

Many parents feel reassured by pseudoscientific treatments - although experts point out that a similar amount of reassurance could be achieved by investing the cost of treatments in wine and chocolate.

Babies suffer through a lot of bogus treatments for the sake of placebo-induced parental reassurance.

So, as regular readers know, I have recently become a mum. As I mentioned in my last post*, I was really shocked by how much pseudoscience is targeted at pregnant women. But four months after the birth, I have to tell you that it is not getting any better. What I find most concerning is just how mainstream the use of proven-not-to-work remedies is. Major supermarkets and chemists stock homeopathic teething powders; it is common to see babies wearing amber necklaces to combat teething; and I can’t seem to attend a mother and baby group without being told about the benefits of baby cranial osteopathy.

I find this preponderance of magical thinking kind of upsetting. I keep wondering why on earth we don’t teach the basics of research methodologies in high schools. But then sometimes I question whether my attitude is just yet another example of parents being judgey. I mean, other than the fact that people are wasting their money on useless treatments, does it really matter that people don’t understand research evidence? Is worrying about scientific illiteracy similar to Scottish people getting annoyed at English people who cross their hands at the beginning, rather than during the second verse, of Auld Lang Syne: i.e. technically correct but ultimately unimportant and a bit pedantic?

I guess that I have always had the hypothesis that it does matter; that if people are unable to understand the evidence behind medical interventions for annoying but self-limiting afflictions, they will also find it difficult to make evidence-informed decisions about other aspects of their lives. And crucially, they will not demand that policy makers back up their assertions about problems and potential solutions with facts.

But I have to admit that this is just my hypothesis.

So, my question to you is, what do you think? And furthermore, what are the facts? Is there any research evidence which has looked at the links between public ‘science/evidence literacy’ and decision making?? I’d be interested in your thoughts in the comments below. 

.

* Apologies by the way for the long stretch without posts – I’ve been kind of busy. I am happy to report though that I have been using my time to develop many new skills and can now, for example, give virtuoso performances of both ‘Twinkle Twinkle’ and ‘You cannae shove yer Granny’.†,‡

† For those of you unfamiliar with it, ‘You cannae shove yer Granny (aff a bus)’ is a popular children’s song in Scotland. No really. I think the fact that parents feel this is an important life lesson to pass on to their children tells you a lot about my country of birth…

‡ Incidentally, I notice that I have only been on maternity leave for 4 months and I have already resorted to nested footnotes in order to capture my chaotic thought processes. This does not bode well for my eventual reintegration into the world of work.



Beneficiary feedback: necessary but not sufficient?

One of the things I love about working in DFID is that people take the issue of beneficiary* feedback very seriously. Of course we don’t get it right all the time. But I like to think that the kind of externally designed, top-down, patronising solutions that are such a feature of the worst kind of development interventions (one word: BandAid**) are much less likely to be supported by the likes of DFID these days.

In fact, beneficiary feedback is so central to how we do our work that criticising it in any way can be seen as controversial; some may see it as tantamount to saying you hate poor people! So just to be clear, I think we can all agree that getting feedback from the people you are trying to help is a good thing. But we do need to be careful not to oversell what it can tell us. Here are a couple of notes of caution:

1. Beneficiary feedback may not be sufficient to identify a solution to a problem

[Image: problem cake]

It is of course vital to work with potential beneficiaries when designing an intervention to ensure that it actually meets their needs. However, it is worth remembering that what people tell you they need may not match what they will actually benefit from. Think about your own experience – are you always the best-placed person to identify the solution to your problems? Of course not – because we don’t know what we don’t know. It is for that reason that you consult with others – friends, doctors, tax advisors etc. – to help you navigate your trickiest problems.

I have come across this problem frequently in my work with policy making institutions (from the north and the south) that are trying to make better use of research evidence. Staff often come up with ‘solutions’ which I know from (bitter) experience will never work. For example, I often hear policy making organisations identify that what they need is a new interactive knowledge-sharing platform – and I have also watched on multiple occasions as such a platform has been set up and has completely flopped because nobody used it.

2. Beneficiary feedback on its own won’t tell you if an intervention has worked

Evaluation methodologies – and in particular experimental and quasi-experimental approaches – have been developed specifically because just asking someone if an intervention has worked is a particularly inaccurate way to judge its effectiveness! Human beings are prone to a whole host of biases – check out this Wikipedia entry for more biases than you ever realised existed. Of course, beneficiary feedback can and should form part of an evaluation, but you need to be careful about how it is gathered – asking a few people who happen to be available and willing to speak to you is probably not going to give you an accurate overview of user experience. The issue of relying on poorly sampled beneficiary feedback was at the centre of some robust criticisms of the Independent Commission for Aid Impact’s recent review of anti-corruption interventions – see Charles Kenny’s excellent blog on the matter here.

If you are trying to incorporate beneficiary feedback into a rigorous evaluation, a few questions to ask are:

– Have you used a credible sampling framework to select those you get feedback from? If not, there is a very high chance that you have got a biased sample – like it or not, the type of person who will end up being easily accessible to you as a researcher will tend to be an ‘elite’ in some way.

– Have you compared responses in your test group with responses from a group which represents a counterfactual situation? If not, you are at high risk of just capturing social desirability bias (i.e. the desire of those interviewed to please the interviewer).

– If gathering feedback using a translator, are you confident that the translator is accurately translating both what you are asking and the answers you get back? There are plenty of examples of translators who, in a misguided effort to help researchers, put their own ‘spin’ on the questions and/or answers.
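
To make the first two of those checks concrete, here is a small illustrative sketch – Python with entirely synthetic data and hypothetical variable names, not a real evaluation design – showing a random sample drawn from a proper sampling frame (rather than a convenience sample) and a simple treated-versus-comparison difference in means standing in for a counterfactual.

```python
# Illustrative sketch only: synthetic data and hypothetical variable names,
# not a real evaluation design.
import random
import statistics

random.seed(42)

# Hypothetical sampling frame: every household in the project area,
# flagged by whether it received the intervention.
frame = [{"id": i, "treated": i % 2 == 0} for i in range(10_000)]

def random_sample(frame, n):
    """Credible approach: every unit in the frame has a known chance of selection."""
    return random.sample(frame, n)

def convenience_sample(frame, n):
    """Shown only for contrast: the first n units you happen to reach -
    in real life this group tends to be an 'elite' subset."""
    return frame[:n]

def simulated_satisfaction(person):
    """Stand-in for a survey response; treated households score slightly higher."""
    return random.gauss(3.5 if person["treated"] else 3.0, 1.0)

sample = random_sample(frame, 400)
treated = [simulated_satisfaction(p) for p in sample if p["treated"]]
comparison = [simulated_satisfaction(p) for p in sample if not p["treated"]]

# Comparing against a comparison group gives a (rough) counterfactual;
# asking only beneficiaries "did it work?" does not.
print(f"treated mean:    {statistics.mean(treated):.2f}")
print(f"comparison mean: {statistics.mean(comparison):.2f}")
print(f"difference:      {statistics.mean(treated) - statistics.mean(comparison):.2f}")
```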

Even once you have used a rigorous methodology to collect your beneficiary feedback, it may not be enough to tell the whole story. Getting feedback from people will only ever tell you about their perception of success. In many cases, you will also need to measure some more objective outcome to find out if an intervention has really worked. For example, it is common for people to conclude their capacity building intervention has worked because people report an increase in confidence or skills. But people’s perception of their skills may have little correlation with more objective tests of skill level. Similarly, those implementing behaviour change interventions may want to check if there has been a change in perceptions – but they can only really be deemed successful if an actual change in objectively measured behaviour is observed.
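
As a purely illustrative sketch (made-up numbers for a hypothetical training course, not real data), the snippet below shows how self-reported confidence and an objective skill test can tell quite different stories about the same intervention.

```python
# Illustrative sketch only: made-up numbers for a hypothetical training course.
import statistics

# Hypothetical before/after data for ten participants.
self_rated_before = [2, 3, 2, 3, 2, 3, 2, 2, 3, 2]            # 1-5 confidence scale
self_rated_after  = [4, 4, 5, 4, 4, 5, 4, 4, 5, 4]
test_score_before = [55, 60, 52, 58, 61, 57, 54, 59, 56, 60]  # % on an objective test
test_score_after  = [56, 61, 53, 57, 62, 58, 55, 60, 57, 59]

confidence_gain = statistics.mean(self_rated_after) - statistics.mean(self_rated_before)
skill_gain = statistics.mean(test_score_after) - statistics.mean(test_score_before)

print(f"Average self-reported confidence gain: {confidence_gain:+.1f} points")
print(f"Average objective test score gain:     {skill_gain:+.1f} percentage points")
# A big jump in confidence alongside a negligible change in measured skill
# suggests the intervention shifted perceptions more than performance.
```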

.

I guess the conclusion to all this is that of course it is important to work with the people you are trying to help both to identify solutions and to evaluate their success. But we also need to make sure that we don’t fetishise beneficiary feedback and as a result ignore the other important tools we have for making evidence-informed decisions.

.

* I am aware that ‘beneficiary’ is a problematic term for some people. Actually I also don’t love it – it does conjure up a rather paternalistic view of development. However, given that it is so widely used, I am going to stick with it for this blog. Please forgive me.

** I refuse to provide linklove to Bandaid but instead suggest you check out this fabulous Ebola-awareness song featured on the equally fabulous Africaresponds website.

 



Ebola-related rant

©EC/ECHO/Jean-Louis Mosser

Warning: I will be making use of my blog for a small rant today. Normal service will resume shortly.

Like many others, I am getting very cross about coverage of Ebola. The first target for my ire is the articles (I won’t link to any of them because I don’t want to drive traffic there) that I keep seeing popping up on Facebook and Twitter suggesting that Ebola is not real and is in fact a western conspiracy designed to justify the roll-out of a vaccine which will kill off Africans. This kind of article is of course ignorant – but it is also highly insulting and dangerous. It is insulting to the thousands of health-care workers who are, as you are reading this, putting their lives on the line to care for Ebola patients in abysmal conditions. Those who are working in the riskiest conditions are health workers from the region. But it is worth noting that hundreds of people from outside Africa – including many government workers – are also volunteering to help, and to suggest that their governments are actually the ones plotting the outbreak is particularly insulting. But even worse is the potential danger of these articles. They risk influencing those who have funds which could be invested in the response – and they also risk influencing those in affected countries not to take up a vaccine if one were developed.

This type of conspiracy theory is of course nothing new – the belief that HIV is ‘not real’ and/or was invented by the west to kill off Africans is widely held across the continent. I have worked with many well-educated African policy makers who have subscribed to that belief. And it is a belief which has killed hundreds of thousands of people. The most famous example is of course in Thabo Mbeki’s South Africa, where an estimated 300,000 people died of AIDS due to his erroneous beliefs. But I am sure the number would be much higher if you were to consider other policy makers and religious leaders who have propagated these types of rumours and advised against taking effective anti-retroviral treatments.

The second thing that is really upsetting me is the implicit racism of some western coverage of the outbreak. I find it deeply depressing that, if you were to take this media coverage as indicative of the interest of people in the US and Europe, you would conclude that they only take Ebola seriously when it starts to affect people in their own country. It’s as if we are incapable of acknowledging our shared humanity with the people of Sierra Leone, Guinea and Liberia. Those are people just like us. People who have hopes and ambitions. People who love their children and get irritated by their mothers-in-law. People who crave happiness. People who are terrified of the prospect of dying an excruciating and undignified death. Why is the immense suffering of these people not enough to get our attention and sympathy?? How could we be so selfish as to get panicked by the incredibly unlikely prospect of the virus spreading in our countries when it already is spreading and causing misery to our fellow human beings???

I mean, if the media of America and Europe wanted to be evidence-informed about their selfishness, they would be spending their time worrying about things far more likely to kill us – cancer, obesity and even the flu. Or they could extend their empathy at least to their children’s generation and spend time worrying about global warming.

But even better, they could also ponder how it is possible that the best case scenario for the west’s response to the crisis is likely to be that thousands of people continue to die excruciating and undignified deaths but at least do so in ways less likely to infect others around them.

That is a pretty depressing prospect.

.

Edit: thanks to @davidsteven for pointing out that my original post was doing a disservice to the people of Europe and America by implying they were all uninterested in the plight of people in Africa. That is of course not true and many (most?) people are very concerned about what is happening. I have tried to edit the above to clarify that it is the panic-stirring by the media that I am really moaning about.



Scottish independence and the fallacy of evidence-BASED policy

[Image: indyref]

As I may have mentioned before, I am a proud Scot. I have therefore been following with interest the debates leading up to the Scottish referendum on independence which will take place on 18th September (for BBC coverage see here or, more entertainingly, watch this fabulous independence megamix). Since I live in England, I don’t get to vote – and even if I did, as a serving civil servant it would not be appropriate for me to discuss my view here. But I do think the independence debate highlights some important messages about evidence and policy making – namely the fact that policy cannot be made BASED on evidence alone.

The main reason for this is that before you make a policy decision you need to decide what policy outcome you wish to achieve – and this decision will be influenced by a whole range of factors including your beliefs, your political views, your upbringing etc. etc. So in the case of the independence debate, as eloquently pointed out by @cairneypaul in this blog, the people of Scotland need to decide what their priorities for the future of Scotland will be. Some will feel that financial stability is the priority, others will focus on the future of the Trident nuclear deterrent, some will focus on their desire for policy decisions to be made locally, while others will care most about preservation of a historic union.

Only once people are aware of what their priorities are will evidence really come into play. In an ideal world there would then be a perfect evidence base which would provide an answer on which option (yes or no) would be most likely to lead to different policy outcome(s). But of course we all know that we don’t live in an ideal world, and so in the independence debate – as in most policy decisions – the evidence is contradictory, incomplete and contested. And therefore a second reason why a decision cannot be fully ‘evidence-based’ is that voters will need to assess the evidence, and a certain degree of subjectivity will inevitably come into this appraisal.

It is for the above reasons that I strongly prefer the term ‘evidence-informed’ to the term ‘evidence-based’*. Evidence-informed decision making IS possible – it involves decision makers consulting and appraising a range of evidence sources and using the information to inform their decision. As such, two policy makers may make completely different policy decisions which have both been fully informed by the evidence. Likewise, my decision to happily eat a large slice of chocolate cake instead of going to the gym can be completely evidence-informed since I get to choose which outcomes I am seeking :-).

A final point is that since evidence can inform policies designed to lead to diverse outcomes, evidence-informed policy making is not inevitably a ‘good thing’; if a policy maker has nefarious aims, she can use evidence to help her achieve these in the same way that a more altruistic policy maker can use evidence to benefit others. Thus efforts to support evidence-informed policy will only be beneficial when those making decisions are actually motivated to improve the lives of others.

.

*n.b. I am a big supporter of the ‘evidence-based policy in development’ network since I suspect the name choice is mainly historical rather than a statement of policy. In fact, judging by discussions via the listserve, I would suspect that most members prefer the term evidence-informed policy.

 



Implementation science: what is it and why should we care?

[Image: pie chart of survey responses]

The 30 participants were mostly members of DFID’s Evidence into Action team plus a few people who follow me on twitter – admittedly not a very rigorous sampling strategy but a useful quick and dirty view!

Last week I attended a day-long symposium on ‘implementation science’ organised by FHI 360. I had been asked by the organisers to give a presentation, and it was only after agreeing that it occurred to me that I really had no idea what implementation science was. It turns out I was not alone – I did a quick survey of colleagues engaged in the evidence-informed policy world and discovered that the majority of them were also unfamiliar with the term (see pie chart). And even when I arrived at the conference full of experts in the field, the first couple of hours were devoted to discussions about what implementation science does and does not include.

To summarise some very in-depth discussions, it seems that there are basically two ways to understand the term.

The definitions that seem most sensible to me describe implementation science as the study of how evidence-informed interventions are put into practice (or not) in real world settings. These definitions indicate that implementation science can only be done after efficacy and effectiveness studies have demonstrated that the intervention can have a positive impact. As @bjweiner (one of the conference speakers) said, implementation science aims to discover ‘evidence-informed implementation strategies for evidence-informed interventions’.

A second category of definitions takes a much broader view of implementation science. These definitions include a wide variety of additional types of research – including impact evaluations, behaviour change research and process evaluations – within the category of implementation science. To be honest, I found this latter category of definitions rather unhelpful – they seemed to be so broad that almost anything could be labelled implementation science. So, I am going to choose to just go with the narrower understanding of the term.

Now I have to mention here that I thoroughly enjoyed the symposium and found implementation scientists to be a really fascinating group to talk with. And so, as a little gift back to them, and in recognition of the difficulties they are having in agreeing on a common definition, I have taken the liberty of creating a little pictorial definition of implementation science for them (below). I am sure they will be delighted with it and trust it will shortly become the new international standard ;-).
[Image: pictorial definition of implementation science]

So what else do you need to know about implementation science?

Well, it tends to be done in the health sector (although there are examples from other sectors) and it seems to focus on uptake by practitioners (i.e. health care providers) more than uptake by policy makers. In addition it is, almost by definition, quite ‘supply’-driven – i.e. it tends to focus on a particular evidence-informed intervention and then study how that can be implemented/scaled up. I am sure that this is often a very useful thing – however, I suspect that the dangers of supply-driven approaches that I have mentioned before will apply; in particular, there is a risk that the particular evidence-informed intervention chosen to be scaled up may not represent the best overall use of funds in a given context. It is also worth noting that promoting and studying the uptake of one intervention may not have long-term impacts on how capable and motivated policy makers/practitioners are to take up and use research in general.

A key take-home message for me was that implementation science is ALL about context. One of my favourite talks was given by @pierrembarker who described a study of the scale-up of HIV prevention care in South Africa. At first the study was designed as a cluster randomised controlled trial; however, as the study progressed, the researchers realised that, for successful implementation, they would need to vary the approach to scale-up depending on local-level conditions, and thus an RCT, which would require standardised procedures across study sites, would not be practical. Luckily, the researchers (and the funders) were smart enough to recognise that a change of plan was needed, and the researchers came up with a new approach which enabled them to tailor the intervention to differing contexts and at the same time generate evidence on outcomes which was as robust as feasible. Another great talk was given by Theresa Hoke of @FHI360 who described two programmes to scale up interventions that almost completely failed (paper about one of them here). The great thing about the implementation science studies was that they were able to demonstrate clearly that the scale-up had failed and to generate important clues for why this might be the case.

One final cool thing about implementation science is how multi-disciplinary it is; at the symposium I met clinicians, epidemiologists, qualitative social scientists and – perhaps most intriguingly – organisational psychologists. I was particularly interested in the latter because I think it would be really great if we could get some of these types involved in evaluating/investigating ‘demand-side’ evidence-informed policy work funded by organisations including DFID, (the department formerly known as) AusAID and AHSPR. These programmes are really all about driving organisational change, and it would be very useful to get an expert’s view on what approaches (if any!) can be taken by outside actors to catalyse and support this.

Anyway, sorry for such a long post but as you can tell I am really excited about my new discovery of implementation science! If you are too, I would strongly recommend checking out the (fully open access) Implementation Science Journal. I found the ‘most viewed’ articles a good place to start. You will also soon be able to check out the presentations from the symposium (including my talk in which I call for more unity between ‘evidence geeks’ like me and implementation scientists) here.