Another week, another blog pointing out that RCTs are not the ‘gold standard’ of evidence despite the fact that NOBODY is saying they are. To be fair to the blogger, he is simply summarising a paper written by Angus Deaton – a man who is a bit of an enigma to me. I have heard him speak and been blown away by how thoughtful, insightful and challenging he is – until he comes to the topic of RCTs, when he seems to become strawmantastic. Anyway, I’ve written about misconceptions about RCTs so many times in the past that I am sure you are bored of hearing me – in fact I am even bored of hearing myself drone on about it. So, in lieu of another post on this matter, I present to you links to previous posts (here, here and here)… and a picture I have drawn for you of a baby panda. Enjoy.
I suspect that one reason that bad capacity building programmes have persisted for so long is that monitoring and evaluation of capacity building has been so poor. It is commonplace for capacity building programmes to be ‘assessed’ almost entirely on the basis of subjective measurements of how much people have enjoyed the experience or how much they think they have learnt. Of course it is lovely that people enjoy themselves – but surely we should be trying a bit harder to find out if people have actually learnt anything.
There are some exceptions where more rigorous approaches have been used and they illustrate just how vital it is that we get a bit more objective in our assessments.
A multi-million pound science communication capacity building programme (which I won’t name!) had an independent evaluation which compared outputs produced by participants before and after they took part in the scheme. The assessment found NO significant difference in the quality of outputs. A bit of a depressing finding.
A train-the-trainers workshop I ran used a diagnostic test before and after the course to test knowledge of basic principles of pedagogy. The test did reveal a significant increase in scores – although it was notable that a full third of participants continued to get the wrong answers even after the intensive course. But more worryingly, observations of teaching practices carried out in the months following the course revealed that many participants had reverted to their old, bad teaching habits. This certainly taught me the importance of follow-up mentoring and within-workplace support for learning.
In both the above examples, participants themselves rated the capacity building programmes as excellent – further illustrating that people’s subjective view of the experience may differ significantly from a more objective assessment of what has been learnt.
I strongly believe that if we implemented better monitoring and evaluation of capacity building programmes, it would be quite depressing to start with because it would prove that lots of the stuff we are doing is not working. But it would provide a mighty big incentive for all of us to up our game and start adapting capacity building programmes so they could make a real difference.
So that’s it, those are my four simple rules. What do others think? Would you add other rules? Or do you think I am being too harsh on capacity building programmes, and they are actually generally better than I have implied? Thoughts welcomed!
Want to read the full series of 4 blogs? Start with this one here.
This rule may sound so obvious that it is not even worth stating. But it is amazing how many projects which are labelled as capacity building don’t seem to contain any plans to actually support the building of capacity, i.e. learning.
One common mistake is to think that giving funding to an organisation in the south is ‘capacity building’, as if the money will somehow lead to learning through a process of osmosis. There are plenty more ‘capacity building’ schemes which contain activities supposedly to support learning which are so badly designed and implemented that they are very unlikely to achieve their aims. I have sat through a fair number of ‘capacity building’ workshops that were so deathly boring that the only thing I learnt was how to pass the time until the next tea break.
The sad thing is that there is actually a lot of good knowledge on how people learn, and those who run capacity building could benefit massively from understanding it. I am not talking about the pseudoscientific stuff like the practice of teaching according to learning styles – but the more serious study of pedagogy that has demonstrated which practices really support learning, and which ones should be discarded. At an organisational level, there is also plenty of good learning on how to support organisational development. It is extremely arrogant of us to assume that just because we know about a given topic, we know how to support others to learn about it.
The point is that you don’t need to start from scratch when designing capacity building – get speaking to people who know and go to some courses in pedagogy/training skills/organisational development and your capacity building programme will be dramatically improved.
As I mentioned in the previous post, you can never ‘build someone else’s capacity’. All you can do as an outsider is to support the learning of others. Therefore it is good to be humble about what you can achieve. You are unlikely to facilitate a miracle transformation so it is usually best not to attempt this! If you want to support someone to be able to do something, the best chance you have is to find those who are almost there and just need a little extra support.
One of the best individual capacity building programmes I know of is highly successful in large part because it has incredibly tough entry requirements. The scheme, run by the International Union Against Tuberculosis and Lung Disease, selects highly qualified medical practitioners to receive training in Operational Research. Participants need to go through a rigorous selection process and then they need to commit to an intensive year-long training schedule. A key feature is that they need to demonstrate not only that they are qualified to take part but also that they have the personal commitment. They only graduate from the scheme once they have completed all the key milestones, which include submission of an original research article to a peer-reviewed journal. As a result of this process, the scheme achieves remarkable success rates, with almost 80% of participants managing to get a peer-reviewed publication. By comparison, I know of other academic writing courses which have never managed to support a single participant to the stage of getting a publication.
The ruthless selection rule applies equally if you are working with an organisation. You need to ask yourself whether an increase in capacity/learning will be sufficient for the organisation in question to become self-sustaining. In other words, is there a demand for the services the organisation offers which they are just unable to capitalise on due to low capacity? In such cases, there could be a good reason to get involved. But if the organisation is failing because there is a fundamental lack of demand/market/funding for that type of organisation, you need to question whether your capacity building programme will really lead to long-term change. To find out if the organisation is likely to be sustainable, you need to make sure you speak not only to those who would benefit from an increase in the organisation’s capacity, but also to those who would determine whether it becomes sustainable in the long term.
The ruthless selection rule sounds harsh and elitist. And in some ways it is harsh and elitist. However, it is also effective since it enables people to target the relatively small amount of support that an outsider can provide to those individuals and organisations who actually have the potential to benefit from it.
A frequent comment about capacity building is that it is very difficult and complicated. I understand this to a point – I mean any endeavour that involves human beings is going to be complicated. But it is possible to fail at something for a long time even if that thing turns out to be easy once you know how! And I wonder whether when we say capacity building is difficult, what we really mean is that we have so far failed to do it well.
Personally, I suspect that capacity building is not as difficult as it has been made out to be. In fact, in the coming posts I am going to propose four simple rules for capacity building, and I hypothesise that if implementers followed these rules their success rate would be dramatically higher.
The first rule gets to the heart of what we actually mean by capacity building. It is, after all, a bit of a funny term that we use in the development field but generally not in our real lives. For me, capacity building means learning. Individual learning, organisational learning or even societal learning. Thinking about it in this sense highlights an important feature of capacity building – it has to be owned by the ‘beneficiary’. No-one can make another person learn, and therefore no-one can ‘build someone else’s capacity’*. As outsiders, all we can do is to support the learning/capacity building of others.
So, rule number 1 is that those who are benefitting from the capacity building programme need to have ownership of their learning. This doesn’t mean that outside agencies can’t implement capacity building programmes – but it does mean that they will need to make very sure, at an early stage, that those who are intended to benefit from the work are actually fully bought-in and committed.
A good example of this comes from the organisation I used to work for, INASP. They have been working for many years with consortia of academic librarians, researchers and ICT experts in a number of developing countries. They support these consortia to build their capacity to support access, availability and use of research information. In some cases, the experts in INASP might think they know what the best thing for a given country consortium to do is. However, while they may provide some advice, they realise that change will only really happen if the consortium itself comes up with and implements its own solution.
Funnily enough, you can learn a lot about this approach by watching trashy television shows like Mary Portas: Queen of Shops or Gordon Ramsay’s Kitchen Nightmares. In these shows, the main job of the presenter is not to tell people what they need to do but to guide them to the point where they recognise for themselves what is needed – and then get on and do it!
So that was rule number 1 – the next 3 will be coming up over the coming days. If you want to get each post direct to your inbox you can sign up on the right to receive email updates. I look forward to hearing your thoughts, objections and additions!
Go to rule 2.
*The only time I do think it is acceptable to talk about building someone else’s capacity is if you are indulging in the niche sport of ‘dirty development talk’ (see below). This concept was introduced to me by two friends who are now happily married – proof, methinks, of its efficacy.
I have been having an amusing and distracting twitter conversation this week about how to look smart in front of the various different tribes of development specialists. Here’s a few tips to instantly up your credibility no matter who you are meeting with…
If you are meeting a social development expert, no matter what the topic, be sure to ask if they have considered it through a ‘gendered lens’.
In meetings with evaluation experts, ALWAYS question the credibility of the counterfactual. If that doesn’t work, you can resort to questioning the external validity.
Make social scientists think you are one of them by dropping the word epistemology into any discussion. For example, try opening a sentence with the phrase “Epistemologically speaking,…” but be sure to practise this beforehand, because if you come out with a few too many syllables all your efforts will have been wasted. “Normative” is another good social science word to throw in, and is particularly useful for casting doubt on someone’s opinion while maintaining the facade that you are just upholding objectivity, i.e. “hmm… isn’t that a rather normative stance you are taking?”
People from IDS will invariably nod enthusiastically if you say “I think we need to unpack this a little further”; ODI types will be more impressed by you alluding to political economy analysis and/or complexity theory; and those working for DFID will love you if you mention value for money in every second sentence.
And of course, everybody’s favourites: the economists – it is just too easy to tease them for their impenetrable jargon. There are so many good economist catchphrases that it is hard to know where to start, but I particularly liked @otis_read’s suggestion of “wow, interesting project, except for obvious endogeneity problem” and, from @fp2p: Look em in eye & say “I’m not convinced by your elasticities”
Have a great weekend – some slightly more serious blogs coming up next week.
Stefan Dercon and Paul Clist recently published this excellent short paper outlining 12 principles to consider before using a Payment by Results (PbR) contract for development programmes. But, as pointed out by @hmryder, it is written in quite technical language. You can’t blame the authors – I mean, they are hardcore economists who probably speak that way when they are watching the football. So I have attempted to translate the paper for fellow simple folk – economists do let me know if I have made any mistakes.
Principle 1: PbR involves paying for something after it has been delivered. Therefore it only works if the implementer has enough money in the first place to pay for the work until they are reimbursed.
Principle 2: If you are going to pay based on results, you need to be able to measure the results. If you choose a proxy indicator (i.e. not the final result you are looking for but something that has to change along the way), you need to make sure that changes in your indicator really suggest that the end result will change too.
Principle 3: Some people will game the system by finding ways to make it seem that they have achieved the results when they actually haven’t. Perhaps more worrying is that if you choose the wrong proxy indicator, it might lead people to concentrate too much on trying to achieve that without trying to achieve the actual end result you are looking for.
Principle 4: Donors shouldn’t use PbR just as a way to reduce their risk, for two reasons. Firstly, donors are actually usually much better able to handle risk than implementing partners. This is because donors tend to be funding lots of projects, so if one or two go wrong, they still know they have others that should work. Implementers, on the other hand, may only have one project so they are likely to be really risk averse. The second reason is that the implementer is already likely to be very susceptible to risk and by transferring the additional risk of potential non-payment, you will probably just make them even more risk averse.
Principle 5: If the thing that you want to achieve is essentially the same as the thing the implementer wants to achieve, PbR may not be that useful. PbR should be used to incentivise implementers to do the thing that you want them to do, and you might be wasting effort if they are already fully incentivised to do that thing anyway.
Principle 6: PbR is useful where it is difficult to measure what the implementers are doing (inputting), and therefore you need to measure what they are achieving. If you can easily measure what they are doing, just do that.
Principle 7: PbR works well when achieving the result you are looking for is actually within the control (more or less) of the implementers. It doesn’t work well when there are loads of factors outside the implementers’ control which will determine whether the result is achieved.
Principle 8: The biggest extra cost of PbR contracts compared to other contracts is the cost of verifying whether results (or a suitable proxy indicator of results) have been achieved.
Principle 9: There is some evidence that trying to incentivise people who are already very motivated to do something by giving them money can actually backfire – they may feel insulted that you think they need to be paid to do something when actually they want to do it because they think it is the right thing. (I wrote about this a bit here).
Principle 10: Donors need to be honest about the practical constraints they are working under and to be aware when these might get in the way of an effective PbR contract.
Principle 11: You can only judge whether your PbR contract has been successful by looking to see whether the end result you were aiming for has actually been achieved. Just showing that a proxy indicator has been achieved is not enough.
Principle 12: Remember that PbR is not the only tool in the box for incentivising performance.