kirstyevidence

Musings on research, international development and other stuff



Does public (mis)understanding of science actually matter?

Many parents feel reassured by pseudoscientific treatments - although experts point out that a similar amount of reassurance could be achieved by investing the cost of treatments in wine and chocolate.

Babies suffer through a lot of bogus treatments for the sake of placebo-induced parental reassurance.

So, as regular readers know, I have recently become a mum. As I mentioned in my last post*, I was really shocked by how much pseudoscience is targeted at pregnant women. But four months after the birth, I have to tell you that it is not getting any better. What I find most concerning is just how mainstream the use of proven-not-to-work remedies is. Major supermarkets and chemists stock homeopathic teething powders; it is common to see babies wearing amber necklaces to combat teething; and I can’t seem to attend a mother and baby group without being told about the benefits of baby cranial osteopathy.

I find this preponderance of magical thinking kind of upsetting. I keep wondering why on earth we don’t teach the basics of research methodologies in high schools. But then sometimes I question whether my attitude is just yet another example of parents being judgey. I mean, other than the fact that people are wasting their money on useless treatments, does it really matter that people don’t understand research evidence? Is worrying about scientific illiteracy similar to Scottish people getting annoyed at English people who cross their hands at the beginning, rather than during the second verse, of Auld Lang Syne: i.e. technically correct but ultimately unimportant and a bit pedantic?

I guess that I have always had the hypothesis that it does matter; that if people are unable to understand the evidence behind medical interventions for annoying but self-limiting afflictions, they will also find it difficult to make evidence-informed decisions about other aspects of their lives. And crucially, they will not demand that policy makers back up their assertions about problems and potential solutions with facts.

But I have to admit that this is just my hypothesis.

So, my question to you is, what do you think? And furthermore, what are the facts? Is there any research evidence which has looked at the links between public ‘science/evidence literacy’ and decision making?? I’d be interested in your thoughts in the comments below. 

.

* Apologies by the way for the long stretch without posts – I’ve been kind of busy. I am happy to report though that I have been using my time to develop many new skills and can now, for example, give virtuoso performances of both ‘Twinkle Twinkle’ and ‘You cannae shove yer Granny’.†,‡

† For those of you unfamiliar with it, ‘You cannae shove yer Granny (aff a bus)’ is a popular children’s song in Scotland. No really. I think the fact that parents feel this is an important life lesson to pass on to their children tells you a lot about my country of birth…

‡ Incidentally, I notice that I have only been on maternity leave for 4 months and I have already resorted to nested footnotes in order to capture my chaotic thought processes. This does not bode well for my eventual reintegration into the world of work.



The arrival of mini-evidence!

Readers, I have to confess that I have been keeping a secret from you; for the last nine months, in my spare time, I have been growing a human!

I didn’t want to mention it before because I was feeling superstitious (yes, yes, I get the irony) – but I am now happy to announce the safe arrival of my son last week.

Now, the perceptive amongst you will recognise that this post is just a poorly-disguised excuse for a proud new mum to show off a picture of her offspring (see right).

However, in an attempt to shoe-horn my news into the theme of my blog, I hereby present five things that pregnancy and childbirth have taught me about evidence-informed decision making:

1. Pregnancy is open season for pseudo-science. I have been amazed at how otherwise sensible sources of information seem to be completely happy to promote dodgy quackery when talking about pregnancy. It is difficult to find a book or article about pregnancy problems which doesn’t eventually advocate trying homeopathy, reiki or some other daft treatment plan, while the pronouncements about what you can and cannot do while pregnant are often arbitrary rather than fact-based. This Guardian article on the topic is great.

2. The best pseudo-scientific ‘fact’ I heard was the idea that foot massage could be dangerous during pregnancy since there is apparently an acupressure point on your foot that can induce early labour – which made me imagine a horde of reflexologists-gone-bad moonlighting as backstreet, alternative therapy abortionists.

3. The ubiquity of bad science in pregnancy-related advice is particularly disappointing considering the rich history of good research on pregnancy and childbirth. In fact, the Cochrane Collaboration grew out of efforts in the 1980s to produce objective reviews of research in perinatal medicine.

4. Just as in policy making, lived experience can trump statistics. This is demonstrated by the number of mums who will assume that your experience will be the same as theirs despite the massive variation in ‘normal’ pregnancy and childbirth.

5. Those of you who work in the field of evidence-informed policy making may think you know a lot about the competing influences of evidence, beliefs, politics, prejudice, vested interests and so on in decision making. But you have not seen anything until you have spent some time browsing a mums’ online discussion forum…

The only thing that remains for me to say is that my blog posts might be a bit infrequent over the coming months – please bear with me as I might be a bit preoccupied. And, what’s that I hear you say? You would like to see another photo? Oh OK then, here you go!



Beneficiary feedback: necessary but not sufficient?

One of the things I love about working in DFID is that people take the issue of beneficiary* feedback very seriously. Of course we don’t get it right all the time. But I like to think that the kind of externally designed, top-down, patronising solutions that are such a feature of the worst kind of development interventions (one word: BandAid**) are much less likely to be supported by the likes of DFID these days.

In fact, beneficiary feedback is so central to how we do our work that criticising it in any way can be seen as controversial; some may see it as tantamount to saying you hate poor people! So just to be clear, I think we can all agree that getting feedback from the people you are trying to help is a good thing. But we do need to be careful not to oversell what it can tell us. Here are a couple of notes of caution:

1. Beneficiary feedback may not be sufficient to identify a solution to a problem

It is of course vital to work with potential beneficiaries when designing an intervention to ensure that it actually meets their needs. However, it is worth remembering that what people tell you they need may not match what they will actually benefit from. Think about your own experience – are you always the best placed person to identify the solution to your problems? Of course not – because we don’t know what we don’t know. It is for that reason that you consult with others – friends, doctors, tax advisors etc. to help you navigate your trickiest problems.

I have come across this problem frequently in my work with policy making institutions (from the north and the south) that are trying to make better use of research evidence. Staff often come up with ‘solutions’ which I know from (bitter) experience will never work. For example, I often hear policy making organisations identify that what they need is a new interactive knowledge-sharing platform – and I have also watched on multiple occasions as such a platform has been set up and has completely flopped because nobody used it.

2. Beneficiary feedback on its own won’t tell you if an intervention has worked

Evaluation methodologies – and in particular experimental and quasi-experimental approaches – have been developed specifically because just asking someone if an intervention has worked is a particularly inaccurate way to judge its effectiveness! Human beings are prone to a whole host of biases – check out this Wikipedia entry for more biases than you ever realised existed. Of course, beneficiary feedback can and should form part of an evaluation but you need to be careful about how it is gathered – asking a few people who happen to be available and willing to speak to you is probably not going to give you a particularly accurate overview of user experience. The issue of relying on poorly sampled beneficiary feedback was at the centre of some robust criticisms of the Independent Commission for Aid Impact’s recent review of anti-corruption interventions – see Charles Kenny’s excellent blog on the matter here.

If you are trying to incorporate beneficiary feedback into a rigorous evaluation, a few questions to ask are:

- Have you used a credible sampling framework to select those you get feedback from? If not, there is a very high chance that you have got a biased sample – like it or not, the type of person who will end up being easily accessible to you as a researcher will tend to be an ‘elite’ in some way.
- Have you compared responses in your test group with responses from a group which represents a counterfactual situation? If not, you are at high risk of just capturing social desirability bias (i.e. the desire of those interviewed to please the interviewer).
- If gathering feedback using a translator, are you confident that the translator is accurately translating both what you are asking and the answers you get back? There are plenty of examples of translators who, in a misguided effort to help researchers, put their own ‘spin’ on the questions and/or answers.

The short sketch after this list illustrates the first of these points.
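To make the sampling point concrete, here is a minimal, hypothetical sketch in Python. The sampling frame, the group sizes and the idea that the first 100 beneficiaries are the ‘easy to reach’ ones are all invented for illustration; they do not come from any real evaluation.

```python
import random

random.seed(0)

# Hypothetical sampling frame: 1,000 registered beneficiaries, of whom the
# first 100 happen to live near the project office ('easy to reach').
sampling_frame = [f"beneficiary_{i}" for i in range(1000)]
easy_to_reach = set(sampling_frame[:100])

# Convenience sample: interview whoever is easiest to find.
convenience_sample = sampling_frame[:30]

# Probability sample: draw 30 respondents at random from the full frame,
# so every beneficiary has the same chance of being selected.
probability_sample = random.sample(sampling_frame, 30)

print("Convenience sample drawn entirely from the 'easy to reach' group:",
      all(p in easy_to_reach for p in convenience_sample))
print("Number of 'easy to reach' respondents in the random sample:",
      sum(p in easy_to_reach for p in probability_sample), "out of 30")
```

Under these made-up numbers the convenience sample is made up entirely of ‘easy to reach’ respondents, while the random sample contains roughly the 10% you would expect – which is exactly the kind of bias a credible sampling framework is there to avoid.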

Even once you have used a rigorous methodology to collect your beneficiary feedback, it may not be enough to tell the whole story. Getting feedback from people will only ever tell you about their perception of success. In many cases, you will also need to measure some more objective outcome to find out if an intervention has really worked. For example, it is common for people to conclude their capacity building intervention has worked because people report an increase in confidence or skills. But people’s perception of their skills may have little correlation with more objective tests of skill level. Similarly, those implementing behaviour change interventions may want to check if there has been a change in perceptions – but they can only really be deemed successful if an actual change in objectively measured behaviour is observed.

.

I guess the conclusion to all this is that of course it is important to work with the people you are trying to help both to identify solutions and to evaluate their success. But we also need to make sure that we don’t fetishise beneficiary feedback and as a result ignore the other important tools we have for making evidence-informed decisions.

.

* I am aware that ‘beneficiary’ is a problematic term for some people. Actually I also don’t love it – it does conjure up a rather paternalistic view of development. However, given that it is so widely used, I am going to stick with it for this blog. Please forgive me.

** I refuse to provide linklove to Bandaid but instead suggest you check out this fabulous Ebola-awareness song featured on the equally fabulous Africaresponds website.

 



The politics of evidence supply and demand

I have written before about the separate functions of evidence supply and demand. To recap, supply concerns the production and communication of research findings while demand concerns the uptake and usage of evidence. While this model can be a useful way to think about the process of evidence-informed policy making, it has been criticised for being too high level and not really explaining what evidence supply and demand looks like in the real world – and in particular in developing countries.

I was therefore really pleased to see this paper from the CLEAR centre at the University of the Witwatersrand which examines in some detail what supply and demand for evidence, in this case specifically evaluation evidence, looks like in five African countries.

What is particularly innovative about this study is that they compare the results of their assessments of evaluation supply and demand with a political economy analysis and come up with some thought-provoking ideas about how to promote the evidence agenda in different contexts. In particular, they divide their five case study countries into two broad categories and suggest some generalisable rules for how evidence fits into each.

Developmental patrimonial: the ‘benevolent dictator’

Two of the countries – Ethiopia and Rwanda – they categorise as broadly developmental patrimonial. In these countries, there is strong centralised leadership with little scope for external actors to exert influence. Perhaps surprisingly, in these countries there is relatively high endogenous demand for evidence; the central governments have a strong incentive to achieve developmental outcomes in order to maintain their legitimacy and therefore, at least in some cases, look for evaluation evidence to inform what they do. These countries also have relatively strong technocratic ministries which may be more able to deal with evidence than those in some other countries. It is important to point out that these countries are not consistently and systematically using research evidence to inform decisions and that in general they are more comfortable with impact evaluation evidence which has clear pre-determined goals rather than evidence which questions values. But there does seem to be some existing demand and perhaps the potential for more in the future. When it comes to supply of evaluations, the picture is less positive: although there are examples of good supply, in general there is a lack of expertise in evaluations, and most evaluations are led by northern experts.

Neopatrimonial: a struggle for power and influence

The other three countries – Malawi, Zambia and Ghana – are categorised as broadly neopatrimonial. These countries are characterised by patronage-based decision making. There are multiple interest groups which are competing for influence and power largely via informal processes. Government ministries are weaker and stated policy may bear little relationship to what actually happens. Furthermore, line ministries are less influenced by the Treasury and thus incentives for evidence from the Treasury are less likely to have an effect. However, the existence of multiple influential groups does mean that there are more diverse potential entry points for evidence to feed into policy discussions. Despite these major differences in demand for evidence, evaluation supply in these countries was remarkably similar to that in developmental patrimonial countries – i.e. some examples of good supply but in general relatively low capacity and reliance on external experts.

I have attempted to summarise the differences between these two categories of countries – as well as the commonalities – in the table below.

[Table: summary of evaluation supply and demand in developmental patrimonial and neopatrimonial countries]

There are a couple of key conclusions which I drew from this paper. Firstly, if we are interested in supporting the demand for evidence in a given country, it is vital to understand the political situation to identify entry points where there is potential to make some progress on use of evidence. The second point is that capacity to carry out evaluations remains very low despite a large number of evaluation capacity building initiatives. It will be important to understand whether existing initiatives are heading in the right direction and will produce stronger capacity to carry out evaluations in due course – or whether there is a need to rethink the approach.



Ebola-related rant

©EC/ECHO/Jean-Louis Mosser

Warning: I will be making use of my blog for a small rant today. Normal service will resume shortly.

Like many others, I am getting very cross about coverage of Ebola. The first target for my ire is the articles (I won’t link to any of them because I don’t want to drive traffic there) that I keep seeing popping up on Facebook and Twitter suggesting that Ebola is not real and is in fact a western conspiracy designed to justify the roll-out of a vaccine which will kill off Africans. This kind of article is of course ignorant – but it is also highly insulting and dangerous. It is insulting to the thousands of health-care workers who are, as you are reading this, putting their lives on the line to care for Ebola patients in abysmal conditions. Those who are working in the riskiest conditions are health workers from the region. But it is worth noting that hundreds of people from outside Africa – including many government workers – are also volunteering to help, and to suggest that their governments are actually the ones plotting the outbreak is particularly insulting. But even worse is the potential danger of these articles. They risk influencing those who have funds which could be invested in the response – and they also risk influencing those in affected countries to not take up a vaccine if one were developed.

This type of conspiracy theory is of course nothing new – the belief that HIV is ‘not real’ and/or was invented by the west to kill off Africans is widely held across the continent. I have worked with many well-educated African policy makers who have subscribed to that belief. And it is a belief which has killed hundreds of thousands of people. The most famous example is of course in Thabo Mbeki’s South Africa, where an estimated 300,000 people died of AIDS due to his erroneous beliefs. But I am sure the number would be much higher if you were to consider other policy makers and religious leaders who have propagated these types of rumours and advised against taking effective anti-retroviral treatments.

The second thing that is really upsetting me is the implicit racism of some western coverage of the outbreak. I find it deeply depressing that, if you were to take this media coverage as indicative of the interest of people in the US and Europe, you would conclude that they only take Ebola seriously when it starts to affect people in their own country. It’s as if we are incapable of acknowledging our shared humanity with the people of Sierra Leone, Guinea and Liberia. Those are people just like us. People who have hopes and ambitions. People who love their children and get irritated by their mothers-in-law. People who crave happiness. People who are terrified of the prospect of dying an excruciating and undignified death. Why is the immense suffering of these people not enough to get our attention and sympathy?? How could we be so selfish as to get panicked by the incredibly unlikely prospect of the virus spreading in our countries when it already is spreading and causing misery to our fellow human beings???

I mean, if the media of America and Europe wanted to be evidence-informed about their selfishness, they would be spending their time worrying about things far more likely to kill us – cancer, obesity and even the flu. Or they could extend their empathy at least to their children’s generation and spend time worrying about global warming.

But even better, they could also ponder how it is possible that the best case scenario for the west’s response to the crisis is likely to be that thousands of people continue to die excruciating and undignified deaths but at least do so in ways less likely to infect others around them.

That is a pretty depressing prospect.

.

Edit: thanks to @davidsteven for pointing out that my original post was doing a disservice to the people of Europe and America by implying they were all uninterested in the plight of people in Africa. That is of course not true and many (most?) people are very concerned about what is happening. I have tried to edit above to clarify that it is the panic-stirring by the media that I am really moaning about.



Science to the rescue: doing the sums

In this final episode of my blog series on research and international development (to start at the beginning click here) I will consider the evidence on the economic returns to research investment. Of course, policy makers considering investing in research will not be satisfied just knowing whether it may lead to positive outcomes or not. Public spending always has an opportunity cost and decision makers will want to know whether investment in research is likely to lead to greater benefits than alternative uses of funds.

For decades, researchers have attempted to assess rates of return to investment in research and remarkably high figures have been calculated. The most studied area has been agricultural research, and in this sector it is common to hear claims that the amount invested will generate returns of 40-50% per year for many years into the future. Such figures have been used widely to justify further research investment. But these results have been questioned by many.

To put the high rates of return which have been reported for agricultural research into perspective, a group from the University of Minnesota recently reported that if you used the median reported figure for rates of return to agricultural research and considered the amount of public investment in agriculture in the USA for the year 2000, you would expect the return by the year 2050 to be $208 quadrillion – or 1,400 times the projected GDP of the entire world!
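To see why such rates of return cannot be taken at face value, it helps to do the compounding yourself. Here is a minimal back-of-envelope sketch in Python; the investment figure and the rate are illustrative placeholders rather than the actual inputs used in the Minnesota analysis, but they show how a 40-50% annual return turns billions into quadrillions within 50 years.

```python
# Illustrative compounding of a research investment at the kinds of rates of
# return often reported for agricultural research. All inputs are hypothetical.
initial_investment = 3e9   # assumed ~$3 billion of public agricultural R&D in 2000
annual_return = 0.45       # mid-point of the commonly quoted 40-50% range
years = 50                 # 2000 to 2050

value_2050 = initial_investment * (1 + annual_return) ** years

print(f"Implied value by 2050: ${value_2050:,.0f}")
print(f"That is roughly {value_2050 / 1e15:,.0f} quadrillion dollars")
```

Whatever plausible starting figure you plug in, compounding at these rates produces numbers that dwarf projected world GDP – which is exactly the Minnesota group’s point.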

So why the dodgy numbers?

Well the first answer is that it is methodologically quite challenging to calculate rates of return to research. Econometric analysis can be used to demonstrate that there are correlations between research investment and economic growth – but demonstrating causal links is far more challenging. Many attempts have been made to examine the cost of all research which has fed into the development of a particular new product or technology and to estimate rates of return based on the economic benefit that the new invention delivers – this approach is referred to as ‘simulation modeling’ in the literature review. The disadvantage of this method is that it is easy to only look at successful research which has led to the development of useful products and to ignore other ‘dead-end’ research. And of course this methodology excludes research which leads to benefits through other pathways that are not technological fixes.

A more controversial reason why the research in this area continues to be flawed is that the answers generated, even from flawed methodologies, have been politically convenient. In a 2010 article in Nature, News Editor Colin Macilwain commented:

“Beneath the rhetoric, . . . there is considerable unease that the economic benefits of science spending are being oversold. . . . The problem, economists say, is that the numbers attached to widely quoted economic benefits of research have been extrapolated from a small number of studies, many of which were undertaken with the explicit aim of building support for research investment, rather than being objective assessments.”

Having said that, the demand for rates of return is unlikely to diminish and there are some signs that researchers are innovating to improve the accuracy of the figures calculated. Pioneering studies of medical research (here and here) in the UK examined entire sectors of research – thus overcoming the tendency to focus only on ‘success stories’ – and compared costs with a portfolio of benefits. The methodology is promising although be aware that in the first study they add a large ‘fudge-factor’ number (30%) to the overall result to account for spillover effects – this number is based on previous agricultural research studies.

Another UK-based group has published a heroic analysis of the impacts of social science research using the price that people are willing to pay for research expertise as an indicator of its economic benefit. And the Minnesota group mentioned above has recently published an important paper looking at the reinvestment of profits in models of research investment.

These promising developments suggest that future policy makers may have figures on rates of return to research which are a truer reflection of reality.

So, as we come to the end of this marathon blog run, what have we learnt?

Well, overall the picture on research’s contribution to development is mixed. Research does have important and even transformational effects. Involvement in research develops key skills that are crucial for growth; inventions such as drugs and new agricultural technologies benefit millions of people; and research evidence can inform and thus improve policy and programme decisions. However there are also some notes of caution – and some widely held beliefs about research which do not appear to stand up to scrutiny.

Firstly, the idea that public investment to stimulate research and innovation will lead to economic growth is hard to justify. It seems that building skills in understanding research and problem solving might be a more useful strategy.

Secondly, the widely held assumption that investing in research is a good way to improve tertiary education provision is not backed up by the available evidence. Evidence from high-income contexts suggests that this is not the case and at present there is no good evidence from low-income countries. Having said that, research can and does lead to improvements in human capital but this needs to be planned for and supported.

Thirdly, while research has delivered some remarkable improvements to the lives of poor people, there is also evidence of a tendency amongst donors to use research to develop technical fixes without fully understanding the nature of the problem they are trying to solve.

And finally, the focus on supplying and communicating research in order to drive evidence-informed policy will need to be matched with efforts to build capacity, incentives and systems to use that research if positive impacts are to be maximised.



Science to the rescue: evidence-informed policy

In part 5 of my series of blogs on research and international development (to start at the beginning, click here) I return to familiar territory: evidence-informed policy and practice.

Investments in development research from both donors and low-income country governments have increasingly been justified as a means to achieve better development outcomes through more evidence-informed policy and practice. The World Bank, for example, justifies its investment in research mainly on this basis, stating:

“Bank research […] is directed toward recognized and emerging policy issues and is focused on yielding better policy advice.”

A "What works" decision

A “What works” decision

There are numerous examples where research evidence has had positive impacts on policy. In the literature review, two major ways in which policy can be informed by evidence are highlighted. The most common understanding of evidence-informed policy is use of evidence to understand ‘what works’. Examples of this include recent changes in donor funding for microfinance programmes based on an emerging body of evidence about their effectiveness; the general switch to providing malaria bednets for free rather than for a small charge based on evidence about the impact of user fees on uptake of health services; and the shift in attention from getting kids into schools to focussing on learning achievement based on evidence that increased attendance was not significantly improving learning.

Evidence informing a policy maker’s view of the world

The second way in which evidence can inform policy is perhaps less recognised, but arguably just as (or more?) important: evidence can also be used to inform decision makers’ understanding of context – their understanding and conceptualisation of the world around them; what the policy priorities are and how they interact; and their beliefs about what should be done. If you like, this is their implicit, internal ‘theory of change’ and it is likely to have a big impact on their decision-making. This type of evidence use was highlighted in an article examining use of evidence by DFID advisors in conflict zones who:

“[…] spoke about the influence of research through process of ‘osmosis and seepage’ and ‘selective absorption’ whereby they come into contact with concepts ‘floating around’ and generally shaping the debate”

This ‘seepage’ of research can occur by decision makers keeping up to date with the academic literature – but perhaps even more important is the role played by ‘thought leaders’ – current or former academics who can have a huge impact on people’s beliefs, narratives and conceptual frameworks – see further discussion in the blog examining human capital.

The increase in ‘evidence-informed policy’ rhetoric has been matched by a remarkable increase in activities to promote the communication, uptake and impact of research. However, it should be noted that this drive to ensure research leads to impact can also have unintended negative consequences. A key tenet of evidence-informed approaches is that decisions should be based on the body of evidence – however, there has been a tendency for research funders to incentivise researchers to push out the findings of their individual research studies without referencing the wider evidence base. This is particularly dangerous when researchers have not used the most rigorous research approaches and when policy makers do not have the necessary capacity to appraise the quality of the outputs. On a number of key development topics, the result you get is related to the rigour of your research approach. For example, low-quality studies of community-driven development programmes tend to report far greater effectiveness than more rigorous evaluations do.

Of course, it is necessary that research is effectively communicated – however, there is a growing recognition that decision makers and the organisations in which they work also need to be able to understand, appraise and use the whole body of research. There is evidence that the incentives to use research in most developing country policy making institutions are low – although some would argue that this is not significantly different from the situation in more developed countries. What does differ is the individual and organisational capacity to make use of research evidence. In a synthesis study of policy debates from four African countries, Emma Broadbent of the Overseas Development Institute highlights that:

“Even when it is used, research is often poorly referenced and seemingly selective; the full implications of research findings are poorly understood; and the logical leap required to move from research cited in relation to a specific policy problem (e.g. HIV/AIDS transmission trends in Uganda) to the policy prescription or solution proposed (e.g. the criminalisation of HIV/AIDS transmission) is often vast.”

This finding is consistent with a growing number of studies indicating that many policy making institutions in developing countries lack both individuals with the necessary training to effectively find and appraise research evidence and decision-making systems which incorporate scrutiny of the evidence.

In conclusion, there is evidence that research evidence can lead to policy and programme improvements; however, there is also evidence that, unless we support the ‘demand’ for research as well as the supply, the outputs of many research programmes will not have the positive impacts we intend.

Concluding blog of the series coming up tomorrow!

Part 6 available here.