kirstyevidence

Musings on research, international development and other stuff



Guest post: Louise Shaxson on advising governments… and ugly babies

I have known Louise Shaxson for many years and have always valued her advice and insight. However, when she wrote to me recently to tell me that she had written a blog about how to talk to new parents about their ugly babies… I was seriously concerned that we might be heading for a fall-out. Turns out I had no need to worry. For a start, the article is actually about giving advice to governments (although I think it is relevant to people giving advice to any organisation). But also, on reflection, I remembered that MY baby is totally ADORABLE. So it’s all good.

Right then, here’s the blog – and I couldn’t resist adding some pics. Hope you like it!

Being nice about an ugly baby… three tips for presenting research to governments

Presenting research results to government can be like talking to a new parent whose baby isn’t, perhaps, the best looking on the planet (read on to find out why).

Even if a government department has commissioned your research, it can be hard to write a final report that is well received and acted on. I’ve heard numerous researchers say that their report was politely received and then put on a shelf, or that it was badly received because it exposed some home truths.

A long time ago, I submitted the first draft of a report that the client didn’t like. He told me it was too confrontational. But he recognised the importance of the message and spent time explaining how to change its presentation to make the message more helpful.

I was grateful for this guidance and redrafted the report.  Consequently, it was not just well received; it helped instigate a series of changes over the next two years and was widely referenced in policy documents.

It’s not easy—I still don’t always get it right—but here are my three tips for crafting your research report, so that it is more likely to be read and used:

  1. Be gentle – government departments are sensitive to criticism.

All parents are proud of their baby, even if he doesn’t look like HRH Prince George, and no parent wants to be told in public that their baby is ugly. You can still appreciate chubby cheeks, a button nose or a wicked grin.

The media is constantly on the lookout for policy ‘failures’ – both real and perceived.  Even if there’s no intention to publish, things can leak.  If the media picks up your research and the coverage is unflattering, your client will have to defend your findings to senior managers, maybe even to the Minister, and spend a considerable amount of effort devising a communication strategy in response. 

Begin by recognising what they have achieved, so that you can put what they haven’t yet achieved into context.

  2. Observations might work better than recommendations.
tired mum

Don’t comment on how badly the baby’s dressed without recognising how difficult it was for an exhausted mother just to get herself and the baby out of the house.

No matter how much subject knowledge you have, you don’t fully understand the department’s internal workings, processes and pressures. Your client will probably be well aware of major blunders that have been made and won’t thank you for pointing them out yet again.

Framing recommendations as observations and constructive critiques will give your client something to work with.

  3. Explain why, not just what should be done differently.
messy baby

If you are telling a parent that the way their baby is dressed could be improved, they may have to sell the idea to other family members – even if they themselves agree with you. Make their life easier by explaining why the suggested new approach will work better.

Your client will have to ‘sell’ your conclusions to their colleagues. No matter how valid your criticisms, it’s difficult for them to tell people they’re doing it wrong.

Try not to say that something should be done differently without explaining why. Explaining the reasoning allows your clients to work out for themselves how to incorporate your findings.

Taking a hypothetical situation in the agriculture sector, here are some examples of how to put these tips into practice:

More likely to cause problems:

Recommendation 1: If the agricultural division requires relevant evidence, it needs to clearly define what ‘relevant’ means in the agricultural context before collecting the evidence.

Implication: you haven’t really got a clue what sort of evidence you want.

More likely to be well received:

Observation 1: Improving our understanding of what constitutes ‘relevant evidence’ means clarifying and communicating the strategic goals of the agricultural division and showing how the evidence will help achieve them.

Implication: there are some weaknesses in specific areas, but here are some things you can do about them. Using ‘our understanding’ rather than ‘the division’ is less confrontational.

More likely to cause problems:

Recommendation 2: Relationships with the livestock division have been poor. More should be done to ensure that the objectives of the two divisions are aligned so the collection of evidence can be more efficient.

Implication: you haven’t sorted out the fundamentals. ‘Should’ is used in quite a threatening way here.

More likely to be well received:

Observation 2: Better alignment between the objectives of the agricultural and livestock divisions will help identify where the costs of collecting evidence could be shared and the size of the resulting savings. The current exercise to refresh the agricultural strategy provides an opportunity to begin this process.

Implication: we understand that your real problem is to keep costs down. Here is a concrete opportunity to address the issue (the strategy) and a way of doing it (aligning objectives). Everyone knows the relationship is poor; you don’t need to rub it in.

More likely to cause problems:

Recommendation 3: The division has a poor understanding of what is contained in the agricultural evidence base.

Recommendation 4: More work needs to be done to set the strategic direction of the agricultural evidence base.

Implication: wow, you really don’t have a clue about what evidence you’ve got or why you need it.

More likely to be well received:

Observation 3: An up-to-date understanding of what is contained in the agricultural evidence base will strengthen the type of strategic analysis outlined in this report.

Implication: having current records of what is in the evidence base would have improved the analysis in this report (i.e. not just that it’s poor, but why it’s poor). Recommendation 4 is captured in the rewritten Observation 1.

This guest post was written by Louise Shaxson, a Research Fellow in the Research and Policy in Development (RAPID) programme at ODI.




Capacity building rule 4: measure whether anything has been learnt

I suspect that one reason that bad capacity building programmes have persisted for so long is that monitoring and evaluation of capacity building has been so poor. It is commonplace for capacity building programmes to be ‘assessed’ almost entirely on the basis of subjective measurements of how much people have enjoyed the experience or how much they think they have learnt. Of course it is lovely that people enjoy themselves – but surely we should be trying a bit harder to find out if people have actually learnt anything.

There are some exceptions where more rigorous approaches have been used and they illustrate just how vital it is that we get a bit more objective in our assessments.

A multi-million pound science communication capacity building programme (which I won’t name!) had an independent evaluation which compared outputs produced by participants before and after they took part in the scheme. The assessment found NO significant difference in the quality of outputs. A bit of a depressing finding.

A train-the-trainers workshop I ran used a diagnostic test before and after the course to test knowledge of basic principles of pedagogy. The test did reveal a significant increase in scores – although it was notable that a full third of participants continued to get the wrong answers even after the intensive course. But more worryingly, observations of teaching practices carried out in the months following the course revealed that many participants had reverted to their old, bad teaching habits. This certainly taught me the importance of follow-up mentoring and within-workplace support for learning.

In both the above examples, participants themselves rated the capacity building programmes as excellent – further illustrating that people’s subjective view of the experience may differ significantly from a more objective assessment of what has been learnt.

I strongly believe that if we implemented better monitoring and evaluation of capacity building programmes, it would be quite depressing to start with because it would prove that lots of the stuff we are doing is not working. But it would provide a mighty big incentive for all of us to up our game and start adapting capacity building programmes so they could make a real difference.

So that’s it, those are my four simple rules. What do others think? Would you add other rules? Or do you think I am being too harsh on capacity building programmes, and that they are actually generally better than I have implied? Thoughts welcomed!

Want to read the full series of 4 blogs? Start with this one here.

 



Capacity building rule 3: do stuff to facilitate learning

This rule may sound so obvious that it is not even worth stating. But it is amazing how many projects that are labelled as capacity building don’t seem to contain any plans to actually support the building of capacity, i.e. learning.

One common mistake is to think that giving funding to an organisation in the south is ‘capacity building’, as if the money will somehow lead to learning through a process of osmosis. There are plenty more ‘capacity building’ schemes whose activities, supposedly there to support learning, are so badly designed and implemented that they are very unlikely to achieve their aims. I have sat through a fair number of ‘capacity building’ workshops that were so deathly boring that the only thing I learnt was how to pass the time until the next tea break.

The sad thing is that there is actually a lot of good knowledge about how people learn, and those who run capacity building could benefit massively from understanding it. I am not talking about the pseudoscientific stuff like the practice of teaching according to learning styles, but the more serious study of pedagogy that has demonstrated which practices really support learning and which ones should be discarded. And at an organisational level, there is lots of good learning on how to support organisational development. It is extremely arrogant of us to assume that just because we know about a given topic, we know how to support others to learn about it.

The point is that you don’t need to start from scratch when designing capacity building – speak to people who know, go to some courses in pedagogy/training skills/organisational development, and your capacity building programme will be dramatically improved.

Go to rule 4 here… or start with the first post in the series here.

 



Capacity building rule 2: be ruthless in your selection

Rosipaw, Flickr


As I mentioned in the previous post, you can never ‘build someone else’s capacity’. All you can do as an outsider is to support the learning of others. Therefore it is good to be humble about what you can achieve. You are unlikely to facilitate a miracle transformation so it is usually best not to attempt this! If you want to support someone to be able to do something, the best chance you have is to find those who are almost there and just need a little extra support.

One of the best individual capacity building programmes I know of is highly successful in large part because it has incredibly tough entry requirements. The scheme, run by the International Union Against Tuberculosis and Lung Disease, selects highly qualified medical practitioners to receive training in operational research. Participants need to go through a rigorous selection process and then commit to an intensive year-long training schedule. A key feature is that they need to demonstrate not only that they are qualified to take part but also that they have the personal commitment. They only graduate from the scheme once they have completed all the key milestones, which include submission of an original research article to a peer-reviewed journal. As a result of this process, the scheme achieves remarkable success rates, with almost 80% of participants managing to get a peer-reviewed publication. By comparison, I know of other academic writing courses which have never managed to support a single participant to the stage of getting a publication.

The ruthless selection rule applies equally if you are working with an organisation. You need to ask yourself whether an increase in capacity/learning will be sufficient for the organisation in question to become self-sustaining. In other words, is there a demand for the services the organisation offers which it is simply unable to capitalise on due to low capacity? In such cases, there could be a good reason to get involved. But if the organisation is failing because there is a fundamental lack of demand/market/funding for that type of organisation, you need to question whether your capacity building programme will really lead to long-term change. To find out whether the organisation is likely to be sustainable, make sure you speak not only to those who would benefit from an increase in the organisation’s capacity, but also to those who would determine whether it becomes sustainable in the long term.

The ruthless selection rule sounds harsh and elitist. And in some ways it is harsh and elitist. However, it is also effective since it enables people to target the relatively small amount of support that an outsider can provide to those individuals and organisations who actually have the potential to benefit from it.

Go to rule 3 here… or see previous post here.



Capacity building – why so difficult?

A frequent comment about capacity building is that it is very difficult and complicated. I understand this to a point – I mean any endeavour that involves human beings is going to be complicated. But it is possible to fail at something for a long time even if that thing turns out to be easy once you know how! And I wonder whether when we say capacity building is difficult, what we really mean is that we have so far failed to do it well.

Personally, I suspect that capacity building is not as difficult as it has been made out to be. In fact, in the coming posts I am going to propose four simple rules for capacity building, and I hypothesise that if implementers followed these rules their success rate would be dramatically higher.

The first rule gets to the heart of what we actually mean by capacity building. It is, after all, a bit of a funny term that we use in the development field but generally not in our real lives. For me, capacity building means learning. Individual learning, organisational learning or even societal learning. Thinking about it in this sense highlights an important feature of capacity building – it has to be owned by the ‘beneficiary’. No-one can make another person learn and therefore no-one can ‘build someone else’s capacity’*. As outsiders, all we can do is to support the learning/capacity building of others.

So, rule number 1 is that those who are benefitting from the capacity building programme need to have ownership of their learning. This doesn’t mean that outside agencies can’t implement capacity building programmes – but it does mean that they will need to make very sure, at an early stage, that those who are intended to benefit from the work are fully bought-in and committed.

A good example of this comes from the organisation I used to work for, INASP. They have been working for many years with consortia of academic librarians, researchers and ICT experts in a number of developing countries, supporting these consortia to build their capacity to improve access to, availability of and use of research information. In some cases, the experts at INASP might think they know what the best thing for a given country consortium to do would be. However, while they may provide some advice, they realise that change will only really happen if the consortium itself comes up with and implements its own solution.

Funnily enough, you can learn a lot about this approach by watching trashy television shows like Mary Portas’ Queen of Shops or Gordon Ramsay’s Kitchen Nightmares. In these shows, the main job of the presenter is not to tell people what they need to do but to guide them to the point where they recognise for themselves what is needed and then get on and do it!

So that was rule number 1 – the next three will follow over the coming days. If you want to get each post direct to your inbox, you can sign up on the right to receive email updates. I look forward to hearing your thoughts, objections and additions!

Go to rule 2.


*The only time I do think it is acceptable to talk about building someone else’s capacity is if you are indulging in the niche sport of ‘dirty development talk’ (see below). This concept was introduced to me by two friends who are now happily married – proof, methinks, of its efficacy.

'capacity building'

 



Can an outsider help solve your problems?

My sister, who knows about these things, tells me that most great innovations happen when someone from one sector/area of expertise moves to a new sector/area of expertise and introduces a new way of dealing with a problem.

Face-palm moment


This kind of surprises me – my experience is that when new people arrive in my sector, they quite often make lots of the same mistakes that those of us who have been around for a while tried and discarded long ago. But my sister’s revelation made me wonder whether this slightly negative attitude towards newbies is doing me harm. Is my snootiness depriving me of lots of valuable opportunities to learn?

The answer is probably yes, but I think ‘outsider’ input into problem solving does need to be well managed. It is possible that someone with a new perspective will identify a fabulous and innovative way to solve a problem – but there is also a high risk that they will jump to the same naive assumptions that you used to have before you became so jaded… I mean, experienced.

So here are my top tips for both sides of the equation – and, as usual, my advice is gathered from my own experience of messing this type of thing up!

If you are the highly experienced expert who is getting some ‘outsider’ perspective….

1. Stop being so bloomin’ grumpy! Yes, of course you know lots about this, and of course the outsider will appear ignorant – but if you can attempt to engage with them enthusiastically – even gratefully – and provide evidence for why certain ideas might not work (rather than rolling your eyes!), you might well get a useful new perspective.

2. Build your credibility as an expert by summarising the important bodies of knowledge that you have learnt from – including your own experiences, books, experts, research evidence, etc. This will be more helpful and more persuasive than just expecting people to realise that you know best (even if you do!).

3. Don’t be afraid to pinpoint parts of the problem which you already feel well-placed to solve – and other parts where you would welcome some input.

If you are the bright-eyed, bushy-tailed outsider who has been brought in to advise…

1. Make sure it is clear that you want to listen – this usually reduces people’s resistance. And try to spend as much time as possible understanding the problem that people are trying to solve before jumping in with solutions. I find the ‘Action Learning’ approach really useful for forcing you to stop trying to solve a problem before you actually understand it.

2. Be respectful of people’s knowledge and experience and take the time to listen to how they think the problem should be solved (even if they do seem grumpy!). You may eventually decide to provide constructive challenge to their proposed solutions, but this will never be effective unless you really understand why they are proposing them.

3. Repeatedly invite the experts to challenge any new ideas you have – and develop a thick skin!


And, just in case none of this works, you may also want to check out this post on dealing with disagreements…!



The enthalpy of aid

I have decided to continue my theme of exceedingly geeky analogies by today comparing international development to a chemical reaction. I don’t know if you remember learning chemistry at school, but one of the few things I can remember is drawing those little ‘enthalpy graphs’ that show how reactions proceed. The graph shown here could, for example, represent setting fire to something very flammable – you need to put a bit of energy in at the beginning, but then overall you get more energy out than you put in. The little bit of energy you put in at the beginning is called the activation energy.

I think that the very best international development projects are similar to this activation energy. They are projects which inject a little bit of additional funds, know-how, coordination or whatever into people and organisations who already have huge potential to make a difference. This is why I really like the new advertising campaign from Oxfam America – it highlights how relatively small amounts of investment can be enough to support people to really make a difference. It makes a welcome change from the dominant narrative in international development fundraising campaigns – think B-list celebrity rowing down a remote African river, giving out bed-nets to the poor (but happy) Africans. The Oxfam campaign, in contrast, highlights people living and working in developing countries who are working day in, day out to make a difference in their community – and it demonstrates that the contribution external actors can make may be important but is relatively minor.

I think many capacity building programmes would do well to consider the ‘activation energy’ phenomenon too. The fact is that capacity building programmes – whether they involve mentoring, training, organisational development or whatever – generally only make a marginal change. As an example, let’s say that capacity building will lead to a 5% increase in ‘capacity’ to do something (yes, I know you can’t measure capacity as a percentage, but bear with me!). You can see that if you offer that programme to people who have 30% existing capacity, it will increase their capacity but they still won’t really be able to do much. But if you are able to work with people who already have 95% capacity, your programme could be just the little bit of extra input that is needed to make a real difference.

There are some ethical concerns about this approach – some would say that it is unfair to offer support to those who are already doing relatively well. I think this is a fair comment, but I also think that we need to be really honest about how much a given project can ever achieve. If it can only ever offer relatively minor increases in effectiveness, our only choice is to provide it to those who could use that increase to make a real difference. If we aim to support those who are far less able to improve their own situation and those of others, then we may need to rethink our project.

Of course this approach completely relies on being able to identify those who have the existing potential – the people and organisations who are at 95%. And unsurprisingly, this is easier said than done! But maybe that is another blog post…