kirstyevidence

Musings on research, international development and other stuff



Holding decision makers to account for evidence use

Evidence-informed policy – it’s a wonderful thing. But just how widespread is it? The ‘Show your workings’ report from the Institute for Government (and collaborators Sense About Science and the Alliance for Useful Evidence) has asked this question and concluded… not very. It states “there [are] few obvious political penalties for failing to base decisions on the best available evidence”. I have to say that as a civil servant this rings true. It’s not that people don’t use evidence – actually most civil servants, at least where I work, do. But there are no good systems in place to distinguish between people who have systematically looked at the full body of evidence and appraised its strengths and weaknesses – and those who have referenced a few cherry-picked studies to back up their argument.

[Image: cat stuck up a tree]

Rosie is my actual cat’s name. And she does indeed make many poor life decisions. Incidentally, I named my other cat ‘Mouse’ and now that I am trying to teach my child to identify animals I am wondering just how wise a life decision that was…

The problem for those scrutinising decision making – parliament, audit bodies and, in the case of development, the Independent Commission for Aid Impact – is that if you are not a topic expert it can be quite hard to judge whether the picture of evidence presented in a policy document does represent an impartial assessment of the state of knowledge. The IfG authors realised this was a problem quite early in their quest – and came up with a rather nifty solution. Instead of trying to decide if decisions are based on an unbiased assessment of evidence, they simply looked at how transparent decision makers had been about how they had appraised evidence.

Now, on the evidence supply side there has been some great work to drive up transparency. In the medical field, Ben Goldacre is going after pharmaceutical companies all guns blazing to get them to clean up their act. In international development, registers of evaluations are appearing and healthy debates are emerging on the nature of pre-analysis plans. This is vitally important – if evaluators don’t declare what they are investigating and how, it is far too easy for them not to bother publishing findings which are inconvenient – or to try multiple types of analysis until, by chance, one gives them a more agreeable answer.

But as the report shows, and as others have argued elsewhere, there has been relatively little focus on transparency on the ‘demand’ side. And by overlooking this, I think that we might have been missing a trick. You see, it turns out that the extent to which a policy document explicitly sets out how evidence has been gathered and appraised is a rather good proxy for systematic evidence appraisal. And the IfG’s hypothesis is that if you could hold decision makers to account for their evidence transparency, you could go some way towards improving the systematic use of evidence to inform decisions.

The report sets out a framework which can be used to assess evidence transparency. As usual, I have a couple of tweaks I would love to see. I think it would be great if the framework included more explicitly an assessment of the search strategy used to gather the initial body of evidence – and perhaps rewarded people for making use of existing rigorous synthesis products such as systematic reviews. But in general, I think it is a great tool and I really hope the IfG et al. are successful in persuading government departments – and crucially those who scrutinise them – to make use of it.

 



Guest post: Louise Shaxson on advising governments… and ugly babies

I have known Louise Shaxson for many years and have always valued her advice and insight. However, when she wrote to me recently to tell me that she had written a blog about how to talk to new parents about their ugly babies… I was seriously concerned that we might be heading for a fall-out. Turns out I had no need to worry. For a start, the article is actually about giving advice to governments (although I think it is relevant to people giving advice to any organisation). But also, on reflection, I remembered that MY baby is totally ADORABLE. So it’s all good.

Right then, here’s the blog – and I couldn’t resist adding some pics. Hope you like it!

Being nice about an ugly baby… three tips for presenting research to governments

Presenting research results to government can be like talking to a new parent whose baby isn’t, perhaps, the best looking on the planet (read on to find out why).

Even if a government department has commissioned your research, it can be hard to write a final report that is well received and acted on. I’ve heard numerous researchers say that their report was politely received and then put on a shelf. Or that it was badly received because it exposed some home truths.

A long time ago, I submitted the first draft of a report that the client didn’t like. He told me it was too confrontational. But he recognised the importance of the message and spent time explaining how to change its presentation to make the message more helpful.

I was grateful for this guidance and redrafted the report.  Consequently, it was not just well received; it helped instigate a series of changes over the next two years and was widely referenced in policy documents.

It’s not easy—I still don’t always get it right—but here are my three tips for crafting your research report, so that it is more likely to be read and used:

  1. Be gentle – government departments are sensitive to criticism.

All parents are proud of their baby, even if he doesn’t look like HRH Prince George, and no parent wants to be told in public that their baby is ugly. You can still appreciate chubby cheeks, a button nose or a wicked grin.

The media is constantly on the lookout for policy ‘failures’ – both real and perceived.  Even if there’s no intention to publish, things can leak.  If the media picks up your research and the coverage is unflattering, your client will have to defend your findings to senior managers, maybe even to the Minister, and spend a considerable amount of effort devising a communication strategy in response. 

Begin by recognising what they have achieved, so that you can put what they haven’t yet achieved into context.

  2. Observations might work better than recommendations.
[Image: tired mum]

Don’t comment on how badly the baby’s dressed without recognising how difficult it was for an exhausted mother just to get herself and the baby out of the house.

No matter how much subject knowledge you have, you don’t fully understand the department’s internal workings, processes and pressures. Your client will probably be well aware of major blunders that have been made and won’t thank you for pointing them out yet again.

Framing recommendations as observations and constructive critiques will give your client something to work with.

  3. Explain why, not just what should be done differently.
[Image: messy baby]

If you are telling a parent that the way their baby is dressed could be improved, they may have to sell the idea to other family members – even if they themselves agree with you. Make their life easier by explaining why the suggested new approach will work better.

Your client will have to ‘sell’ your conclusions to their colleagues. No matter how valid your criticisms, it’s difficult for them to tell people they’re doing it wrong.

Try not to say that something should be done differently without explaining why. Explaining the reasoning allows your clients to work out for themselves how to incorporate your findings.

Taking a hypothetical situation in the agriculture sector, here are some examples of how to put these tips into practice:

More likely to cause problems:

Recommendation 1: If the agricultural division requires relevant evidence, it needs to clearly define what ‘relevant’ means in the agricultural context before collecting the evidence.

Implication: you haven’t really got a clue what sort of evidence you want.

More likely to be well received:

Observation 1: Improving our understanding of what constitutes ‘relevant evidence’ means clarifying and communicating the strategic goals of the agricultural division and showing how the evidence will help achieve them.

Implication: there are some weaknesses in specific areas, but here are some things you can do about it. Using ‘our understanding’ rather than ‘the division’ is less confrontational.

More likely to cause problems:

Recommendation 2: Relationships with the livestock division have been poor. More should be done to ensure that the objectives of the two divisions are aligned so the collection of evidence can be more efficient.

Implication: you haven’t sorted out the fundamentals. ‘Should’ is used in quite a threatening way here.

More likely to be well received:

Observation 2: Better alignment between the objectives of the agricultural and livestock divisions will help identify where the costs of collecting evidence could be shared and the size of the resulting savings. The current exercise to refresh the agricultural strategy provides an opportunity to begin this process.

Implication: we understand that your real problem is to keep costs down. Here is a concrete opportunity to address the issue (the strategy) and a way of doing it (aligning objectives). Everyone knows the relationship is poor; you don’t need to rub it in.

More likely to cause problems:

Recommendation 3: The division has a poor understanding of what is contained in the agricultural evidence base.

Recommendation 4: More work needs to be done to set the strategic direction of the agricultural evidence base.

Implication: wow, you really don’t have a clue about what evidence you’ve got or why you need it.

More likely to be well received:

Observation 3: An up-to-date understanding of what is contained in the agricultural evidence base will strengthen the type of strategic analysis outlined in this report.

Implication: having current records of what is in the evidence base would have improved the analysis we have done in this report (i.e. not just that it’s poor, but why it’s poor). Recommendation 4 is captured in the rewritten Observation 1.

 

This guest post is written by Louise Shaxson, a Research Fellow from the Research and Policy in Development (RAPID) programme at ODI.



The politics of evidence supply and demand

I have written before about the separate functions of evidence supply and demand. To recap, supply concerns the production and communication of research findings while demand concerns the uptake and usage of evidence. While this model can be a useful way to think about the process of evidence-informed policy making, it has been criticised for being too high level and not really explaining what evidence supply and demand looks like in the real world – and in particular in developing countries.

I was therefore really pleased to see this paper from the CLEAR centre at the University of the Witwatersrand, which examines in some detail what supply and demand for evidence – in this case specifically evaluation evidence – look like in five African countries.

What is particularly innovative about this study is that they compare the results of their assessments of evaluation supply and demand with a political economy analysis and come up with some thought-provoking ideas about how to promote the evidence agenda in different contexts. In particular, they divide their five case study countries into two broad categories and suggest some generalisable rules for how evidence fits into each.

Developmental patrimonial: the ‘benevolent dictator’

Two of the countries – Ethiopia and Rwanda – they categorise as broadly developmental patrimonial. In these countries, there is strong centralised leadership with little scope for external actors to exert influence. Perhaps surprisingly, in these countries there is relatively high endogenous demand for evidence; the central governments have a strong incentive to achieve developmental outcomes in order to maintain their legitimacy and therefore, at least in some cases, look for evaluation evidence to inform what they do. These countries also have relatively strong technocratic ministries which may be more able to deal with evidence than those in some other countries. It is important to point out that these countries are not consistently and systematically using research evidence to inform decisions and that in general they are more comfortable with impact evaluation evidence which has clear pre-determined goals than with evidence which questions values. But there does seem to be some existing demand and perhaps the potential for more in the future. When it comes to the supply of evaluations, the picture is less positive: although there are examples of good supply, in general there is a lack of expertise in evaluations, and most evaluations are led by northern experts.

Neopatrimonial: a struggle for power and influence

The other three countries – Malawi, Zambia and Ghana – are categorised as broadly neopatrimonial. These countries are characterised by patronage-based decision making. There are multiple interest groups which are competing for influence and power, largely via informal processes. Government ministries are weaker and stated policy may bear little relationship to what actually happens. Furthermore, line ministries are less influenced by the Treasury, and thus incentives for evidence use coming from the Treasury are less likely to have an effect. However, the existence of multiple influential groups does mean that there are more diverse potential entry points for evidence to feed into policy discussions. Despite these major differences in demand for evidence, evaluation supply in these countries was remarkably similar to that in the developmental patrimonial countries – i.e. some examples of good supply but in general relatively low capacity and reliance on external experts.

I have attempted to summarise the differences between these two categories of countries – as well as the commonalities – in the table below.

[Table: summary of evaluation supply and demand in developmental patrimonial and neopatrimonial countries]

There are a couple of key conclusions which I drew from this paper. Firstly, if we are interested in supporting the demand for evidence in a given country, it is vital to understand the political situation to identify entry points where there is potential to make some progress on use of evidence. The second point is that capacity to carry out evaluations remains very low despite a large number of evaluation capacity building initiatives. It will be important to understand whether existing initiatives are heading in the right direction and will produce stronger capacity to carry out evaluations in due course – or whether there is a need to rethink the approach.



It’s complicated… or is it?

If you’ve ever seen a talk by a member of the Research and Policy in Development team you may well have seen their rather marvellous slide illustrating the policy making process. It starts with a standard diagram of the cyclical policy making process (agenda setting leads to policy formulation, etc.) and then each time the speaker clicks a new arrow appears indicating the linkages between stages and various policy making actors. What’s great about it is that as the speaker continues to speak, the arrows continue to appear until the initial diagram is completely obscured by a tangled web of interactions. I think it is a perfect illustration of the potential complexity of policy making processes, and it provides the rationale for one of the central tenets of the RAPID approach – because policy making processes can work in so many different ways, it is vital that you understand the particular context that you are working in.

However, unfortunately, I think this potential complexity is sometimes used to justify another approach – inaction! On a number of occasions I have heard people involved in evidence-informed policy or policy influence projects assert that they cannot/will not understand the policy making context because ‘it’s too complicated/complex’. I think this is missing the point. It is true that there are many different ways in which policy might be made, but they don’t all exist in any one context and in fact sometimes, when you look into the way in which policy is made on a given topic, in a given place, it is remarkably simple.

For example, I know the guy who more or less single-handedly wrote the science and technology monitoring policy of an entire country. He was in charge of the parastatal organisation which handles science issues and so, with input from his staff and advisors, he wrote it before feeding it up to the relevant minister, who approved it. Similarly, I know a woman who wrote a parliamentary committee report scrutinising climate change policy in her country. She was the parliamentary researcher assigned to the committee and, since the MPs did not have the time or expertise to write such reports, she wrote it and it was later signed off with minor changes by the MPs. As an aside, in both these cases the person in question had the necessary skills to find, synthesise and use the relevant research evidence and thus the policies were evidence-informed, but unfortunately this is not always the case. But anyway, these cases illustrate that some policy making processes are neither complicated nor complex.

So how do you understand the policy making process in your context? Well, for starters, you need to know the basics – you have no right to complain that policy makers don’t understand the basics of research if you don’t understand the basics of policy making! Do you understand what the basic functions of a parliament are (clue: there are three)? What about the functions of government or the civil service? What is the difference between a parliamentary and a presidential system of government (n.b. they both have parliaments so it’s not that!) and which system does your country have? These were some of the questions which we used in an opening ‘quiz’ at the International Conference on Evidence-Informed Policy Making and surprisingly few people got the right answers. If you are struggling too, I strongly recommend some Wikipedia browsing!

Once you’ve mastered the basics you are ready to talk to someone in the system (without risking looking stupid!). My suggestion is to find an opportunity to speak to a member of the civil service or a member of parliamentary staff – they are generally great repositories of information on how the system works – and find out who actually makes policy (and here you will need to be clear on what you mean by policy) in the area you are interested in.

Please note that none of the above is meant to criticise the great work on complexity, adaptive systems and development (see for example this excellent series of three blogs by Owen Barder). Asking questions about how policy is made is just the start of your investigative work, and I am not saying it will necessarily be easy or even possible to fully understand the system. You might find it is complicated. You might even find it is complex. But the point is that there are things you can find out, and getting even some information on the context will dramatically improve the chances of success of any intervention.