kirstyevidence

Musings on research, international development and other stuff



Holding decision makers to account for evidence use

Evidence-informed policy – it’s a wonderful thing. But just how widespread is it? The ‘Show your workings’ report from the Institute for Government (and collaborators Sense About Science and the Alliance for Useful Evidence) has asked this question and concluded… not very. It states “there [are] few obvious political penalties for failing to base decision[s] on the best available evidence”. I have to say that as a civil servant this rings true. It’s not that people don’t use evidence – actually most civil servants, at least where I work, do. But there are no good systems in place to distinguish between people who have systematically looked at the full body of evidence and appraised its strengths and weaknesses – and those who have referenced a few cherry-picked studies to back up their argument.


Rosie is my actual cat’s name. And she does indeed make many poor life decisions. Incidentally, I named my other cat ‘Mouse’ and now that I am trying to teach my child to identify animals I am wondering just how wise a life decision that was…

The problem for those scrutinising decision making – Parliament, audit bodies and, in the case of development, the Independent Commission for Aid Impact – is that if you are not a topic expert it can be quite hard to judge whether the picture of evidence presented in a policy document does represent an impartial assessment of the state of knowledge. The IfG authors realised this was a problem quite early in their quest – and came up with a rather nifty solution. Instead of trying to decide if decisions are based on an unbiased assessment of evidence, they simply looked at how transparent decision makers had been about how they had appraised evidence.

Now, on the evidence supply side there has been some great work to drive up transparency. In the medical field, Ben Goldacre is going after pharmaceutical companies all guns blazing to get them to clean up their act. In international development, registers of evaluations are appearing and healthy debates are emerging about the nature of pre-analysis plans. This is vitally important – if evaluators don’t declare what they are investigating and how, it is far too easy for them not to bother publishing findings which are inconvenient – or to try multiple types of analysis until, by chance, one gives them a more agreeable answer.

But as the report shows, and as others have argued elsewhere, there has been relatively little focus on transparency on the ‘demand’ side. And by overlooking this, I think we might have been missing a trick. You see, it turns out that the extent to which a policy document explicitly sets out how evidence has been gathered and appraised is a rather good proxy for systematic evidence appraisal. And the IfG’s hypothesis is that if you could hold decision makers to account for their evidence transparency, you could go some way towards improving the systematic use of evidence to inform decision making.

The report sets out a framework which can be used to assess evidence transparency. As usual, I have a couple of tweaks I would love to see. I think it would be great if the framework included more explicitly an assessment of the search strategy used to gather the initial body of evidence – and perhaps rewarded people for making use of existing rigorous synthesis products such as systematic reviews. But in general, I think it is a great tool and I really hope the IfG et al. are successful in persuading government departments – and crucially those who scrutinise them – to make use of it.

 




Jaded Aid – the sequel?

I’ve been a little bit delighted to see the publicity that the Jaded Aid card game has been generating (see for example this Foreign Policy write-up). Nothing is more loltastic for development workers than some wryly-observed development humour. As Wayan Vota observes in that FP article, it is affectionate humour; most of us care deeply about the work we are doing. But if we can’t laugh at some of the absurdities of our industry we might go mad (or explode with pomposity).

In this spirit, I’ve been thinking about what other business projects I could get the jaded aid generation to crowd-fund. I think I have come up with some crackers. Here they are – in ascending order of cynicism 😉


Somehow I suspect that my business ideas wouldn’t get me that far on Dragons’ Den…

1. Many people have questioned aid workers’ abilities to actually end world poverty – but surely no-one could deny their deep, contextual knowledge of long-haul flights and seedy business hotels. I mean, I don’t know anyone else as good as me at securing the best seat in economy class or blagging my way into business class lounges. So my first idea is to combine this latent travel knowledge with another skill which development workers have – creating online knowledge repositories (https://kirstyevidence.wordpress.com/2012/10/08/why-your-knowledge-sharing-portal-will-probably-not-save-the-world/). My one-stop-shop would enable seasoned development workers to mentor and share knowledge with long-haul tourists looking for exotic adventures. Development workers will get the satisfaction of being truly useful. And the boost to the tourist industry may benefit poor countries more than many misguided development projects: win-win.
2. Wherever you go in the world, you get ethnic spas based on sanitised versions of indigenous health beliefs. So you get Thai spas with incense, Thai muzak and traditional Thai massages; Indian spas with ladies in saris, treatments inspired by Ayurveda and incense. And, to my surprise, I recently came across a ‘traditional African spa’ with treatments inspired by African traditional medicine carried out to the sound of the Soweto gospel choir. And incense.
People love these spas because it is well known that while people in developing countries may lack wealth, they are rich in indigenous wisdom, charmingly exotic practices… and incense.
So, my proposal is that we give something back to all these developing countries from which we have appropriated our luxury spa treatments. And what better gift than the marvellous indigenous health system of Germany: homeopathy*. I suggest that we set up our German spas across the developing world. Homeopathic massages will have been watered down to the extent that no actual massage is left. Instead, customers will sit in a room with a mulleted German masseuse listening to the relaxing sound of David Hasselhoff – and sniffing bratwurst-scented incense. Health impacts will be mediated by the placebo effect – and the huge pitcher of German beer you will be given before leaving.
3. I have long felt a dilemma about gap-year voluntourism projects. On the one hand, I feel that sending under-qualified people to carry out projects in poor countries can be patronising, unhelpful and potentially undermining to local economies. On the other hand, I do think that it is useful for young, impressionable people to have the chance to connect with people from other cultures and (hopefully) to realise that people the world over are just people. I wondered if there is a way to enable this cross-cultural exchange without the patronising well-digging projects. Which is how I came up with the Mzungu** houseboy project. The idea is to link up earnest European gap year students with nouveau riche African families. To be precise, we would need to find a particular subset of the newly wealthy who want to show off to their friends and family. The Europeans would get an authentic experience of poverty as a houseboy/girl – secure in the knowledge that they are not falling into the white saviour cliché. And the ostentatious families of Lagos, Nairobi or Kampala get the ultimate status symbol: a European houseboy! What is not to like?!

* A system of alternative medicine/quackery – invented in Germany – where active substances are diluted down to infinitesimally low concentrations.
** The word for a European/Caucasian (or sometimes for foreign-resident Africans) in many Bantu languages.



International development and the ‘f’ word

I’m not sure if I have mentioned it, but I am kinda into gender equity.

Or, more precisely, I am a card-carrying, misogyny-hating, bra-burning*, don’t-you-dare-tell-me-I-can’t-do-that-just-cus-I-have-a-uterus kind of feminist.

Most of my friends are similarly inclined** and thus, recently, I got into a discussion about how feminism relates to international development. We talked about two facets – how feminist the international development movement is in its actions, and how feminist the international development industry is as an employer.

The answer to the first question seems obvious: development is obsessed with gender issues and supporting women and girls. Surely it is more feminist than Caitlin Moran reading The Female Eunuch while chanting suffragette slogans? Well, sometimes. It is true that many of those working on projects for women and girls, regardless of gender, are feminist in thought and action. But my friends and I also noted that projects targeting women are particularly susceptible to the ‘white saviour’ myth; there are some who love the idea of parachuting in to save poor vulnerable women from primitive conditions. This type of rhetoric frequently comes from men, but not always. In fact, it is particularly prevalent in that most un-feminist of publications – the women’s glossy magazine. Inserted between articles about why you should feel inadequate about your body or spend ridiculous amounts of money on your appearance, there is often an article about someone who has gone out to Africa to save the poor, vulnerable and helpless women there.

This patronising approach is popular with celebrities aiming to show their caring side – but to some extent it can seep into serious development agencies. One of my friends, a gender specialist, described a recent development conference she had attended. She noted that during the tea breaks there was a good mix of genders, but when the breakout sessions started the (mainly male) economists and political scientists went off to discuss the meaty governance issues while the (mainly female) social development and gender specialists were ushered into rooms to discuss more ‘fluffy’ issues. It drove my friend mad. She didn’t think gender should be reduced to ’boutique’ projects about disadvantaged women; rather, there was a need to think about power relationships much more generally. She said she felt like shouting “I want to talk about gender-sensitive tax regimes, not about periods (well, not always – I reserve the right to talk about that too, but at my choosing)!”

So what about the development industry as an employer – to what extent does it support, promote and empower women? In my career, I have encountered the odd sexist person, but have generally found that the people and organisations I work with promote gender equity more than seems to be the case in many other industries. Actually, in some cases I have assumed that someone has a problem with women but have later discovered they are just downright rude – in a gender non-specific manner. My other development friends reported a range of experiences on this – some, like me, had not found sexism too much of a problem but others had encountered it frequently.

But whether or not people are overtly sexist, there are structural issues in the industry which may disproportionately impede women. I will give a couple of examples.


The point at which the conversation turns to competitive development stories is definitely the time to LEAVE the bar…

Development people are preoccupied with the level of experience that people have ‘in the field’. This is completely justified – I have witnessed the problems you can get when people who have no clue about life in developing countries are managing development programmes, particularly if they don’t have insight into their own lack of knowledge. However, this principle is sometimes used as an excuse for a slightly macho bragging culture and, at times, downright rudeness towards those who are perceived as being less experienced. Unfortunately, people who are more introverted, younger and/or female seem to be disproportionately targeted in this way. And thus I have seen some very unedifying meetings in which a collection of people who have happened to be mainly male have acted in a very discourteous and disrespectful way towards a collection of people who have happened to be mainly female. I suspect these people are not sexist per se – but the combination of their assumptions and their bad manners can still result in gender discrimination. In fact, I suspect that such attitudes probably have a disproportionate effect on other groups as well – including those who come from poorer backgrounds who perhaps don’t have the same level of self-assurance that a lifetime of relative privilege brings with it.

Another issue is that certain groups may genuinely be less able to gather ‘field experience’ – but may have much to offer. Parents with caring responsibilities may not be able to travel overseas at short notice – and, although this is slowly changing, this currently disproportionately affects mothers. Once again, women are not the only group disadvantaged in this way; individuals with physical disabilities may not be able to travel to all locations while people with mental illnesses may struggle with the emotional impact of overseas travel.

None of these issues are insurmountable – but it is important to at least recognise that the industry is set up to favour cis-gender, straight, able-bodied, white, middle/upper class men. By starting with this knowledge, it is possible to consider what actions we can take to improve opportunities for a variety of groups – including women.

.

*Of course I don’t really burn bras since EU regulations have made them all flame-retardant. They spoil all our fun.

** To be honest, I kind of expect you are too. My view on feminism is summed up by comedian Aziz Ansari when he says:

‘If you believe that men and women have equal rights, and then someone asks you if you’re a feminist, you have to say yes. Because that’s how words work. You can’t be like, “Yeah, I’m a doctor who primarily does diseases of the skin.” “Oh, so you’re a dermatologist?” “Oh that’s way too aggressive of a word, not at all, not at all.”’



Guest post: Louise Shaxson on advising governments… and ugly babies

I have known Louise Shaxson for many years and have always valued her advice and insight. However, when she wrote to me recently to tell me that she had written a blog about how to talk to new parents about their ugly babies… I was seriously concerned that we might be heading for a fall-out. Turns out I had no need to worry. For a start, the article is actually about giving advice to governments (although I think it is relevant to people giving advice to any organisation). But also, on reflection, I remembered that MY baby is totally ADORABLE. So it’s all good.

Right then, here’s the blog – and I couldn’t resist adding some pics. Hope you like it!

Being nice about an ugly baby… three tips for presenting research to governments

Presenting research results to government can be like talking to a new parent whose baby isn’t, perhaps, the best looking on the planet (read on to find out why).

Even if a government department has commissioned your research, it can be hard to write a final report that is well received and acted on. I’ve heard numerous researchers say that their report was politely received and then put on a shelf. Or that it was badly received because it exposed some home truths.

A long time ago, I submitted the first draft of a report that the client didn’t like. He told me it was too confrontational. But he recognised the importance of the message and spent time explaining how to change its presentation to make the message more helpful.

I was grateful for this guidance and redrafted the report.  Consequently, it was not just well received; it helped instigate a series of changes over the next two years and was widely referenced in policy documents.

It’s not easy—I still don’t always get it right—but here are my three tips for crafting your research report, so that it is more likely to be read and used:

  1. Be gentle – government departments are sensitive to criticism.

All parents are proud of their baby, even if he doesn’t look like HRH Prince George, and no parent wants to be told in public that their baby is ugly. You can still appreciate chubby cheeks, a button nose or a wicked grin.

The media is constantly on the lookout for policy ‘failures’ – both real and perceived.  Even if there’s no intention to publish, things can leak.  If the media picks up your research and the coverage is unflattering, your client will have to defend your findings to senior managers, maybe even to the Minister, and spend a considerable amount of effort devising a communication strategy in response. 

Begin by recognising what they have achieved, so that you can put what they haven’t yet achieved into context.

  2. Observations might work better than recommendations.

Don’t comment on how badly the baby’s dressed without recognising how difficult it was for an exhausted mother just to get herself and the baby out of the house.

No matter how much subject knowledge you have, you don’t fully understand the department’s internal workings, processes and pressures.  Your client will probably be well aware of major blunders that have been made and won’t thank you for pointing them out yet again.
Framing recommendations as observations and constructive critiques will give your client something to work with.

  3. Explain why, not just what should be done differently.

If you are telling a parent that their baby’s dressing could be improved, they may have to sell the idea to other family members – even if they themselves agree with you. Make their life easier by explaining why the suggested new approach will work better.

Your client will have to ‘sell’ your conclusions to his/her colleagues.  No matter how valid your criticisms, it’s difficult for them to tell people they’re doing it wrong.

Try not to say that something should be done differently without explaining why; the explanation allows your clients to work out for themselves how to incorporate your findings.

Taking a hypothetical situation in the agriculture sector, here are some examples of how to put these tips into practice:

More likely to cause problems:

Recommendation 1: If the agricultural division requires relevant evidence, it needs to clearly define what ‘relevant’ means in the agricultural context before collecting the evidence.

Implication: you haven’t really got a clue what sort of evidence you want.

More likely to be well received:

Observation 1: Improving our understanding of what constitutes ‘relevant evidence’ means clarifying and communicating the strategic goals of the agricultural division and showing how the evidence will help achieve them.

Implication: there are some weaknesses in specific areas, but here are some things you can do about it. Using ‘our understanding’ rather than ‘the division’ is less confrontational.

More likely to cause problems:

Recommendation 2: Relationships with the livestock division have been poor. More should be done to ensure that the objectives of the two divisions are aligned so the collection of evidence can be more efficient.

Implication: you haven’t sorted out the fundamentals. ‘Should’ is used in quite a threatening way here.

More likely to be well received:

Observation 2: Better alignment between the objectives of the agricultural and livestock divisions will help identify where the costs of collecting evidence could be shared and the size of the resulting savings. The current exercise to refresh the agricultural strategy provides an opportunity to begin this process.

Implication: we understand that your real problem is to keep costs down. Here is a concrete opportunity to address the issue (the strategy) and a way of doing it (aligning objectives). Everyone knows the relationship is poor; you don’t need to rub it in.

More likely to cause problems:

Recommendation 3: The division has a poor understanding of what is contained in the agricultural evidence base.

Recommendation 4: More work needs to be done to set the strategic direction of the agricultural evidence base.

Implication: wow, you really don’t have a clue about what evidence you’ve got or why you need it.

More likely to be well received:

Observation 3: An up-to-date understanding of what is contained in the agricultural evidence base will strengthen the type of strategic analysis outlined in this report.

Implication: having current records of what is in the evidence base would have improved the analysis we have done in this report (i.e. not just that it’s poor, but why it is poor). Recommendation 4 is captured in the rewritten Observation 1.

 

This guest post is written by Louise Shaxson, a Research Fellow from the Research and Policy in Development (RAPID) programme at ODI.



Impact via infiltration

Two blogs ago I linked to an article which I attributed to “the ever-sensible Michael Clemens”. Shortly afterwards @m_clem tweeted:

.@kirstyevidence compares @JustinSandefur to @jtimberlake 🔥—but me, I’m “sensible”. Some guys got it, some don’t. https://t.co/KphJ0ITmr8

I mention this in part because it made me chortle but mainly because it prompted me to look back at that Justin Sandefur interview. And on re-reading it, I was really struck by one of Sandefur’s models for how research relates to policy:

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

Since that interview was published, I wrote a literature review for DFID which looked at the impact of research on development. And, having spent months of my life scouring the literature, I am more convinced than ever that the Sandefur/Timberlake effect (as it will henceforth be known) is one of the main ways in which investment in research leads to change.

This pathway can be seen clearly in the careers of successful researchers who become policy makers/advisors. For example, within DFID, the chief scientist and chief economist are respected researchers. But the significant impacts they have had on policy decisions within DFID must surely rival the impacts on society they have had via their academic outputs?

And the case may be even stronger if you also examine ‘failed scientists’ – like, for example, me! The UK Medical Research Council invested considerable amounts of funding to support my PhD studies and post-doc career. And I would summarise the societal impact of my research days as… pretty much zilch. I mean, my PhD research was never even published and my post-doc research was on a topic which was niche even within the field of protozoan parasite immunology.


Undercover nerds – surely the societal impact of all those current and former academics goes beyond their narrow research findings?

In other words, I wouldn’t have to be very influential within development to achieve more impact than I did in my academic career. My successful campaign while working at the Wellcome Trust to get the canteen to stock Diet Irn-Bru probably surpasses my scientific contributions to society! But more seriously, I do think that the knowledge of research approaches, the discipline of empirical thinking, the familiarity with academic culture – and, on occasion, the credibility of being a ‘Dr’ – have really helped me in my career. Therefore, any positive – or indeed negative – impact that I have had can partly be attributed to my scientific training.

Of course, just looking at isolated individual researchers won’t tell us whether, overall, investment in research leads to positive societal impact – and if so, whether the “S/T effect” (I’m pretty sure this is going to catch on so I have shortened it for ease) is the major route through which that impact is achieved. Someone needs to do some research on this if we are going to figure out if it really is a/the major way in which research impacts policy/practice.

But it’s interesting to note that other people have a similar hypothesis: Bastow, Tinkler and Dunleavy carried out a major analysis of the impact of social science in the UK* and their method for calculating the benefit to society of social science investments was to estimate the amount that society pays to employ individuals with post-grad social science degrees.** In other words they assumed that the major worth of all that investment in social science was not in its findings but in the generation of experts. I think the fact that the authors are experimenting with new methodologies to explain the value of research that go beyond the outdated linear model is fabulous.

But wait, you may be wondering, does any of this matter? Well yes, I think it does because a lot of time and energy are being put into the quest to measure the societal impact of research. And in many cases the impact is narrowly defined as the direct effect of research findings and/or research derived technologies. The recent REF impact case studies did capture more diverse impacts including some that could be classified within the S/T™ effect. But I still get the impression that such indirect effects are seen as secondary and unimportant. The holy grail for research impact still seems to be linear, direct, instrumental impact on policy/practice/the economy – despite the fact that:

  1. This rarely happens
  2. Even when we think this is happening, there is a good chance that evidence is in fact just being used symbolically
  3. Incentivising academics to achieve direct impact with their research results can have unintended and dangerous results

Focussing attention on the indirect impact of trained researchers, not as an unimportant by-product but as a major route by which research can impact society, is surely an important way to get a more accurate understanding of the benefits (or lack thereof) of research funding.***

So, in summary, I think we can conclude that Justin Sandefur is quite a sensible bloke.

And, by the way, have any of you noticed how much Michael Clemens resembles George Clooney?

.

* I have linked to their open access paper on the study but I also recommend their very readable book which covers it in more detail along with loads of other interesting research – and some fab infographics.

** Just to be pedantic, I wonder if their methodology needs to be tweaked slightly – they have measured value as the cost of employing social science post-grad degree holders, but surely those graduates have some residual worth beyond their research training? I would think that the real benefit would need to be measured as the excess that society was willing to pay for a social science post-grad degree holder compared to someone without one.
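
To make the pedantry concrete, here is a toy version of the two accounting choices, with entirely invented numbers (the Bastow et al. methodology is, of course, more sophisticated than this sketch):

```python
# Toy illustration (all numbers invented): value research training as the
# wage *premium* over a comparable graduate without the post-grad degree,
# rather than as the whole salary bill.

salary_with_postgrad = 40_000   # hypothetical annual salary, GBP
salary_without = 35_000         # hypothetical comparable non-postgrad salary, GBP
n_graduates = 100_000           # hypothetical number employed

whole_salary_value = salary_with_postgrad * n_graduates
premium_value = (salary_with_postgrad - salary_without) * n_graduates

print(f"Counting whole salaries:  £{whole_salary_value:,}")   # £4,000,000,000
print(f"Counting only the premium: £{premium_value:,}")        # £500,000,000
```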

*** Incidentally, this is also my major reason for supporting research capacity building in the south – I think it is unrealistic to expect that building research capacity is going to yield returns via creation of new knowledge/technology – at least in the short term. But I do think that society benefits from having highly trained scientific thinkers who are able to adapt and use research knowledge and have influence on policy either by serving as policy makers themselves or by exerting evidence-informed influence.



Race for impact in the age of austerity

I have recently been pondering what the age of austerity means for the development community. One consequence which seems inevitable is increasing scrutiny of how development funds are spent. The principle behind this is hard to argue with; money is limited and it seems both sensible and ethical to make sure that we do as much good as possible with what we have. However, the way in which costs and benefits are assessed could have a big impact on the future development landscape. Already, some organisations are taking the value for money principle to its logical conclusion and trying to assess and rank causes in terms of their ‘bang for your buck’. The Open Philanthropy project has been comparing interventions as diverse as cash transfers, lobbying for criminal justice reform and pandemic prevention, and trying to assess which offers the best investment for philanthropists (fascinating article on this here).

The Copenhagen Consensus project* is trying to do a similar thing for the sustainable development goals; using a mixture of cost-benefit analysis and expert opinion, they are attempting to quantify how much social, economic and environmental return development agencies can get by focussing on different goals. For example, they find that investing a dollar in universal access to contraception will result in an average of $120 of benefit. By contrast, they estimate that investing a dollar in vaccinating against cervical cancer will produce only $3 of benefit on average. Looking over the list of interventions and the corresponding estimated returns on investment is fascinating and slightly shocking. A number of high-profile development priorities appear to give very low returns, while some of the biggest returns correspond to interventions such as trade liberalisation and increased migration which are typically seen as outside the remit of development agencies (good discussion of the ‘beyond-aid’ agenda from Owen Barder et al. at CGD, e.g. here).
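
To get a feel for the arithmetic, here is a minimal sketch of that kind of ranking exercise. The first two benefit-cost ratios are the figures quoted above; the third is a made-up placeholder, not a real Copenhagen Consensus estimate.

```python
# Minimal sketch of a benefit-cost ranking in the Copenhagen Consensus style.
estimated_return_per_dollar = {
    "universal access to contraception": 120.0,  # quoted above
    "vaccination against cervical cancer": 3.0,  # quoted above
    "hypothetical intervention X": 15.0,         # invented placeholder
}

# Rank interventions by estimated benefit per dollar invested, highest first.
for name, bcr in sorted(estimated_return_per_dollar.items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"${bcr:>6.0f} of benefit per $1 invested: {name}")
```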

In general, I find the approach of these organisations both brave and important. Of course there needs to be a lot of discussion and scrutiny of the methods before these figures are used to inform policy – for example, I had a brief look at the CC analysis of higher education and found a number of things to quibble with, and I am sure that others would find the same if they examined the analysis of their area of expertise. But the fact that the analysis is difficult does not mean one should not attempt it. I don’t think it is good enough that we continue to invest in interventions just because they are the pet causes of development workers. We owe it both to the tax payers who fund development work and to those living in poverty to do our best to ensure funds are used wisely.


Achieving measurable impacts without doing anything to address root causes

Having said all that, my one note of caution is that there is a danger that these utilitarian approaches inadvertently skew priorities towards what is measurable at the expense of what is most important. Impacts which are most easily measured are often those achieved by solving immediate problems (excellent and nuanced discussion of this from Chris Blattman here). To subvert a well-known saying, it is relatively easy to measure the impact of giving a man a fish, more difficult to measure the impact of teaching a man to fish** and almost impossible to measure, let alone predict in advance, the impact of supporting the local ministry of agriculture to develop its internal capacity to devise and implement policies to support long-term sustainable fishing practices. Analysts in both the Copenhagen Consensus and the Open Philanthropy projects have clearly thought long and hard about this tension and seem to be making good strides towards grappling with it. However, I do worry that the trend within understaffed and highly scrutinised development agencies may be less nuanced.

So what is the solution? Well, firstly, development agencies need to balance easy-to-measure but low-impact interventions with tricky-to-measure but potentially high-impact ones. BUT this does not mean that we should give carte blanche to those working on tricky systemic problems to use whatever shoddy approaches they fancy; too many poor development programmes have hidden behind the excuse that it is too complicated to assess them. Just because measuring and attributing impact is difficult does not mean that we can’t do anything to systematically assess intermediate outcomes and use these to tailor interventions.

To take the example of organisational capacity building – which surely makes up a large chunk of these ‘tricky’ to measure programmes – we need to get serious about understanding what aspects of design and implementation lead to success. We need to investigate the effects of different incentives used in such projects including the thorny issue of per diems/salary supplements (seriously, why is nobody doing good research on this issue??). We need to find out what types of pedagogical approach actually work when it comes to supporting learning and then get rid of all the rubbish training that blights the sector. And we need to think seriously about the extent of local institutional buy-in required for programmes to have a chance of success – and stop naively diving into projects in the hope that the local support will come along later.

In summary, ever-increasing scrutiny of how development funds are spent is probably inevitable. However, if, rather than fearing it, we engage constructively with the discussions, we can ensure that important but tricky objectives continue to be pursued – but also that our approach to achieving them gets better.

* Edit: thanks to tribalstrategies for pointing out that Bjorn Lomborg who runs the Copenhagen Consensus has some controversial views on climate science. This underscores the need for findings from such organisations to be independently and rigorously peer reviewed.

**High five to anyone who now has an Arrested Development song on loop in their head.



Summer reading

I am currently making the most of my maternity leave by swanning around Europe in a campervan for 6 months. I have been thinking about a couple of new blogs which I will try to publish shortly – but luckily in the meantime, lots of other people are saying sensible things so I don’t have to. Here is a selection of things that have caught my eye in recent months….

I have been loving the output of Aidleap. They write about all sorts of development stuff including some great reflections on the evidence like this one on the poor state of development programme monitoring. I was also pleased to see this critique of the development sector’s obsession with innovation – and the hilarious accompanying tweet: “Donors talk about #Innovation like boys talk about sex – they’re incredibly excited about it but don’t know what they’re talking about”. It’s so true and it drives me crazy; who cares if something is innovative? We should care whether it works!


This is a picture of a mountain. It doesn’t have anything to do with this post but I think you will agree that it is rather nice.

Anyone in UK academia will be familiar with/traumatised by (delete as appropriate) REF impact case studies. I have blogged a lot in the past about the difficulties and potential dangers of assessing the impact of individual research projects and thus I loved this blog discussing why the REF case studies may not be a good reflection of policy impact of research.

This blog from the ever-sensible Michael Clemens tries to inject some objective evidence into the highly-charged discussions on migration. This guide to evaluation from the ODI RAPID gang does a great job of presenting a potentially difficult topic clearly. This blog from INASP about raising the profile of southern research is refreshingly practical.

And if you still have time for more, the Guardian’s new long-read section is fabulous and slightly addictive.



Does public (mis)understanding of science actually matter?


Babies suffer through a lot of bogus treatments for the sake of placebo-induced parental reassurance.

So, as regular readers know, I have recently become a mum. As I mentioned in my last post*, I was really shocked by how much pseudoscience is targeted at pregnant women. But four months after the birth, I have to tell you that it is not getting any better. What I find most concerning is just how mainstream the use of proven-not-to-work remedies is. Major supermarkets and chemists stock homeopathic teething powders; it is common to see babies wearing amber necklaces to combat teething; and I can’t seem to attend a mother and baby group without being told about the benefits of baby cranial osteopathy.

I find this preponderance of magical thinking kind of upsetting. I keep wondering why on earth we don’t teach the basics of research methodologies in high schools. But then sometimes I question whether my attitude is just yet another example of parents being judgey. I mean, other than the fact that people are wasting their money on useless treatments, does it really matter that people don’t understand research evidence? Is worrying about scientific illiteracy similar to Scottish people getting annoyed at English people who cross their hands at the beginning, rather than during the second verse, of Auld Lang Syne: i.e. technically correct but ultimately unimportant and a bit pedantic?

I guess that I have always had the hypothesis that it does matter; that if people are unable to understand the evidence behind medical interventions for annoying but self-limiting afflictions, they will also find it difficult to make evidence-informed decisions about other aspects of their lives. And crucially, they will not demand that policy makers back up their assertions about problems and potential solutions with facts.

But I have to admit that this is just my hypothesis.

So, my question to you is, what do you think? And furthermore, what are the facts? Is there any research evidence which has looked at the links between public ‘science/evidence literacy’ and decision making?? I’d be interested in your thoughts in the comments below. 

.

* Apologies by the way for the long stretch without posts – I’ve been kind of busy. I am happy to report though that I have been using my time to develop many new skills and can now, for example, give virtuoso performances of both ‘Twinkle Twinkle’ and ‘You cannae shove yer Granny’.†,‡

† For those of you unfamiliar with it, ‘You cannae shove yer Granny (aff a bus)’ is a popular children’s song in Scotland. No really. I think the fact that parents feel this is an important life lesson to pass on to their children tells you a lot about my country of birth…

‡ Incidentally, I notice that I have only been on maternity leave for 4 months and I have already resorted to nested footnotes in order to capture my chaotic thought processes. This does not bode well for my eventual reintegration into the world of work.



The arrival of mini-evidence!

Readers, I have to confess that I have been keeping a secret from you; for the last nine months, in my spare time, I have been growing a human!

I didn’t want to mention it before because I was feeling superstitious (yes, yes, I get the irony) – but I am now happy to announce the safe arrival of my son last week.

Now, the perceptive amongst you will recognise that this post is just a poorly-disguised excuse for a proud new mum to show off a picture of her offspring (see right).

However, in an attempt to shoe-horn my news into the theme of my blog, I hereby present five things that pregnancy and childbirth have taught me about evidence-informed decision making:

1. Pregnancy is open-season for pseudo-science. I have been amazed at how otherwise sensible sources of information seem to be completely happy to promote dodgy quackery when talking about pregnancy. It is difficult to find a book or article about pregnancy problems which doesn’t eventually advocate trying homeopathy, reiki or some other daft treatment plan, while the pronouncements about what you can and cannot do while pregnant are often arbitrary and non-fact based. This article in the Guardian on this topic is great.

2. The best pseudo-scientific ‘fact’ I heard was the idea that foot massage could be dangerous during pregnancy since there is apparently an acupressure point on your foot that can induce early labour – which made me imagine a horde of reflexologists-gone-bad moonlighting as backstreet, alternative therapy abortionists.

3. The ubiquity of bad science in pregnancy-related advice is particularly disappointing considering the rich history of good research on pregnancy and childbirth. In fact, the Cochrane Collaboration grew out of work begun in the 1980s to produce objective reviews of research in perinatal medicine.

4. Just as in policy making, lived experience can trump statistics. This is demonstrated by the number of mums who will assume that your experience will be the same as theirs despite the massive variation in ‘normal’ pregnancy and childbirth.

5. Those of you who work in the field of evidence-informed policy making may think you know a lot about the competing influences of evidence, beliefs, politics, prejudice, vested interests and so on in decision making. But you have not seen anything until you have spent some time browsing a mums’ online discussion forum…

The only thing that remains for me to say is that my blog posts might be a bit infrequent over the coming months – please bear with me as I might be a bit preoccupied. And, what’s that I hear you say? You would like to see another photo? Oh OK then, here you go!



Beneficiary feedback: necessary but not sufficient?

One of the things I love about working in DFID is that people take the issue of beneficiary* feedback very seriously. Of course we don’t get it right all the time. But I like to think that the kind of externally designed, top-down, patronising solutions that are such a feature of the worst kind of development interventions (one word: BandAid**) are much less likely to be supported by the likes of DFID these days.

In fact, beneficiary feedback is so central to how we do our work that criticising it in any way can be seen as controversial; some may see it as tantamount to saying you hate poor people! So just to be clear, I think we can all agree that getting feedback from the people you are trying to help is a good thing. But we do need to be careful not to oversell what it can tell us. Here are a couple of notes of caution:

1. Beneficiary feedback may not be sufficient to identify a solution to a problem

It is of course vital to work with potential beneficiaries when designing an intervention to ensure that it actually meets their needs. However, it is worth remembering that what people tell you they need may not match what they will actually benefit from. Think about your own experience – are you always the best placed person to identify the solution to your problems? Of course not – because we don’t know what we don’t know. It is for that reason that you consult with others – friends, doctors, tax advisors etc. – to help you navigate your trickiest problems.

I have come across this problem frequently in my work with policy making institutions (from the north and the south) that are trying to make better use of research evidence. Staff often come up with ‘solutions’ which I know from (bitter) experience will never work. For example, I often hear policy making organisations  identify that what they need is a new interactive knowledge-sharing platform – and I have also watched on multiple occasions as such a platform has been set up and has completely flopped because nobody used it.

2. Beneficiary feedback on its own won’t tell you if an intervention has worked

Evaluation methodologies – and in particular experimental and quasi-experimental approaches – have been developed specifically because just asking someone if an intervention has worked is a particularly inaccurate way to judge its effectiveness! Human beings are prone to a whole host of biases – check out this Wikipedia entry for more biases than you ever realised existed. Of course, beneficiary feedback can and should form part of an evaluation, but you need to be careful about how it is gathered – asking a few people who happen to be available and willing to speak to you is probably not going to give you a particularly accurate overview of user experience. The issue of relying on poorly sampled beneficiary feedback was at the centre of some robust criticisms of the Independent Commission for Aid Impact’s recent review of anti-corruption interventions – see Charles Kenny’s excellent blog on the matter here.

If you are trying to incorporate beneficiary feedback into a rigorous evaluation, a few questions to ask are:

– Have you used a credible sampling framework to select those you get feedback from? If not, there is a very high chance that you have a biased sample – like it or not, the type of person who ends up being easily accessible to you as a researcher will tend to be an ‘elite’ in some way.

– Have you compared responses in your test group with responses from a group which represents a counterfactual situation? If not, you are at high risk of just capturing social desirability bias (i.e. the desire of those interviewed to please the interviewer).

– If gathering feedback using a translator, are you confident that the translator is accurately translating both what you are asking and the answers you get back? There are plenty of examples of translators who, in a misguided effort to help researchers, put their own ‘spin’ on the questions and/or answers.
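
To see why the first of those questions matters so much, here is a toy simulation – a sketch with invented numbers, not data from any real evaluation. A small, well-connected minority is both easier to reach and happier with the intervention, so a convenience sample flatters the programme while a random sample lands much closer to the truth.

```python
# Toy simulation (all numbers invented) of convenience vs probability sampling.
import random

random.seed(1)

# 10,000 'beneficiaries': a well-connected 10% average 8/10 satisfaction,
# everyone else averages 5/10.
population = [random.gauss(8, 1) if i < 1_000 else random.gauss(5, 1)
              for i in range(10_000)]

# Convenience sample: whoever is easiest to reach (here, the well-connected).
convenience_sample = population[:200]

# Probability sample: drawn at random from the whole population.
probability_sample = random.sample(population, 200)

def mean(values):
    return sum(values) / len(values)

print(f"Convenience sample mean: {mean(convenience_sample):.1f}/10")  # ~8: flattering
print(f"Probability sample mean: {mean(probability_sample):.1f}/10")  # ~5.3: nearer the truth
```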

Even once you have used a rigorous methodology to collect your beneficiary feedback, it may not be enough to tell the whole story. Getting feedback from people will only ever tell you about their perception of success. In many cases, you will also need to measure some more objective outcome to find out if an intervention has really worked. For example, it is common for people to conclude their capacity building intervention has worked because people report an increase in confidence or skills. But people’s perception of their skills may have little correlation with more objective tests of skill level. Similarly, those implementing behaviour change interventions may want to check if there has been a change in perceptions – but they can only really be deemed successful if an actual change in objectively measured behaviour is observed.
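
On that perception-versus-reality point, a minimal made-up illustration of why it helps to track a self-rated measure alongside an objective one – the numbers below are invented for a hypothetical training course, where reported confidence jumps while the objective test barely moves:

```python
# Invented before/after numbers for a hypothetical training course.
before = {"self-rated skill (0-10)": 4.1, "test score (%)": 52.0}
after = {"self-rated skill (0-10)": 7.8, "test score (%)": 54.0}

for measure in before:
    print(f"{measure}: {before[measure]} -> {after[measure]} "
          f"(change {after[measure] - before[measure]:+.1f})")
```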

.

I guess the conclusion to all this is that of course it is important to work with the people you are trying to help both to identify solutions and to evaluate their success. But we also need to make sure that we don’t fetishise beneficiary feedback and as a result ignore the other important tools we have for making evidence-informed decisions.

.

* I am aware that ‘beneficiary’ is a problematic term for some people. Actually I also don’t love it – it does conjure up a rather paternalistic view of development. However, given that it is so widely used, I am going to stick with it for this blog. Please forgive me.

** I refuse to provide linklove to BandAid but instead suggest you check out this fabulous Ebola-awareness song featured on the equally fabulous Africaresponds website.