kirstyevidence

Musings on research, international development and other stuff

Capacity building rule 4: measure whether anything has been learnt


I suspect that one reason that bad capacity building programmes have persisted for so long is that monitoring and evaluation of capacity building has been so poor. It is commonplace for capacity building programmes to be ‘assessed’ almost entirely on the basis of subjective measurements of how much people have enjoyed the experience or how much they think they have learnt. Of course it is lovely that people enjoy themselves – but surely we should be trying a bit harder to find out if people have actually learnt anything.

There are some exceptions where more rigorous approaches have been used and they illustrate just how vital it is that we get a bit more objective in our assessments.

A multi-million pound science communication capacity building programme (which I won’t name!) had an independent evaluation which compared outputs produced by participants before and after they took part in the scheme. The assessment found NO significant difference in the quality of outputs. A bit of a depressing finding.

A train-the-trainers workshop I ran used a diagnostic test before and after the course to test knowledge of basic principles of pedagogy. The test did reveal a significant increase in scores – although it was notable that a full third of participants continued to get the wrong answers even after the intensive course. But more worryingly, observations of teaching practices carried out in the months following the course revealed that many participants had reverted to their old, bad teaching habits. This certainly taught me the importance of follow-up mentoring and support for learning within the workplace.
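
If you want to make this kind of before-and-after testing concrete, a paired significance test on participants’ scores is one simple option. Below is a minimal sketch in Python – the scores are invented for illustration, and a paired t-test is just one reasonable analysis choice, not necessarily the method used in the evaluations described above.

```python
# A minimal, illustrative pre/post diagnostic comparison.
# Scores are hypothetical: one pre-course and one post-course
# percentage score per participant, in the same order.
from scipy import stats

pre  = [45, 52, 38, 60, 41, 55, 48, 50, 43, 57]  # % correct before the course
post = [58, 66, 40, 72, 55, 70, 49, 63, 59, 68]  # % correct after the course

# Paired t-test: did the same individuals' scores change significantly?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean gain: {mean_gain:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even a test like this only captures short-term recall, of course – as the workshop example above shows, it is follow-up observation that reveals whether practice actually changes.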

In both the above examples, participants themselves rated the capacity building programmes as excellent – further illustrating that people’s subjective view of the experience may differ significantly from a more objective assessment of what has been learnt.

I strongly believe that if we implemented better monitoring and evaluation of capacity building programmes, it would be quite depressing to start with, because it would show that lots of the stuff we are doing is not working. But it would provide a mighty big incentive for all of us to up our game and start adapting capacity building programmes so they could make a real difference.

So that’s it, those are my four simple rules. What do others think? Would you add other rules? Or do you think I am being too harsh on capacity building programmes, and that they are actually generally better than I have implied? Thoughts welcomed!

Want to read the full series of 4 blogs? Start with this one here.

 


12 thoughts on “Capacity building rule 4: measure whether anything has been learnt”

  1. I think you’re right in your assumptions, Kirsty. I’ve worked in the field and have often conflated positive feedback with increased capacity, usually for the purposes of retaining funding. While I’ve always used pre/post tests to monitor outcomes, I tend to be more focused on just keeping busy participants from leaving the program prematurely.

    That said, I do feel enjoyment is critical for capacity-building, at least in the sense of participant engagement. The first way to engage is obviously to create a really useful curriculum. But the best curriculum can be undermined by bad presentation. If people aren’t engaged, they won’t come back. But you’re right that it doesn’t tell the whole story.

    As for M&E, I think NGOs are afraid to publish honest (objective) self-assessments of failed projects not only because of how depressing it (definitely) would be but also because donors may choose to withdraw their funds. I’d rather be allowed to fail sometimes if it meant I could pour money into R&D. We’d learn a lot more if we weren’t afraid to be more transparent with our M&E.

    Just to add my own opinion, I’ve noticed that one-time trainings with occasional follow-ups don’t work for most people. Capacity-building must be embedded in the routine processes of local institutions if it is to keep working long-term.

  2. Hello Kirsty, I broadly agree with all your points and suggestions. Where I am not so sure is on the selection of participants. I think it’s very important to have a strong selection process. However, in cases where a training course is being offered for the first time, being ruthless won’t necessarily work: anything offered in the “market” has to be tested and appreciated before it can become popular and you can start being more selective. I am not saying you should take all the applications you receive; that would actually hinder your programme and its popularity in the end, because participants without the right profile will probably not find it useful. But I think that some water has to go under the bridge before you can get to the point where you can be selective and ruthless.
    On the M&E, I couldn’t agree more! Unfortunately, the type of M&E that you suggest sometimes doesn’t fall within the project time-frame, so once the project is finished there are no resources and no interest in following up on what has happened to the trained participants. But I definitely think we should make an effort and find a way to really track learning outcomes.
    Thanks a lot for sharing all these useful thoughts.
    Best
    Clara

  3. I have extremely limited experience of this type of thing but everything you say makes sense and reflects what I’ve seen. I think we have a lot of capacity (heh…) in this day and age to keep in touch with people after a program and support/follow up, but actually making it happen seems to be another thing.

  4. Thanks very much for opening this up for discussion Kirsty. We’ve been thinking a lot lately at INASP about the different approaches between our 2 major programmes – SRKS and VakaYiko. In SRKS, if all goes to plan, we will support training for between 1,500 and 2,000 academic researchers, journal editors, HE librarians, and IT staff in 22 countries this year alone. About 25% of this training will be delivered online and 10% as a result of ‘embedding’ courses in local curricula (this percentage will increase as we do more embedding). In VakaYiko, we will support a much smaller number in 2 countries much more intensely, including rigorously working to follow all 4 ‘rules’ (although we prefer to think of them as principles), with the addition of co-creation of the course materials with local policy-making experts. We have started to refer to this in-house as ‘gold standard’ capacity development.
    In SRKS, though we aim to follow all 4 principles, it’s a massive challenge when you are reaching such large numbers of people across so many subject areas and countries, and there does need to be some flexibility and trade-offs made between them. Instead of co-creating content in SRKS, as far as possible we ask international experts to write the courses, and then have local experts peer-review the content. Similarly, we are in the midst of developing a ‘trainers network’ so that we can further support our local trainers with pedagogical skills in learner-centred training by providing them with ongoing skills development, but we can only start in two countries – many of our other trainers have received pedagogy training but some have not, and we are aware that this is a potential issue. Yet expertise in the subject matter is also really important. We could be faulted for being too ambitious – but one of INASP’s great strengths is our huge network of people across varied countries, and our ability to facilitate dialogue and sharing of experiences and learning amongst individuals and organisations working in similar but slightly different contexts, at different stages of development. For instance, we currently have a Ghanaian library expert supporting Sierra Leonean librarians, a Sri Lankan research fellow developing a research writing course for social scientists, and are commissioning a regional IT expert to support Nepalese journal access.
    So which is better – the gold standard approach or the ‘scaled-up’ approach? Your blog argues for the former, but I note an absence of the term ‘value for money’, which the SRKS approach most definitely offers. The more principles you follow, the more staff-intensive (and therefore expensive) capacity development is likely to be, and the more that local ownership has the potential to be compromised. One of the issues we are grappling with is how to bring as many ‘gold standard’ principles as possible into SRKS without losing the massive reach of the programme. Of those 1500+ people trained, not all will have their capacity ‘built’, but many will. However much effort we put into the training, though, the deciding factor will be the individuals’ motivation, willingness and excitement about learning and using that learning to make changes in their environment (or to teach others). The other deciding factor is whether the motivated individual is supported by an enabling institutional environment, which allows them to use their skills and newly acquired expertise. Rules suggest a rigidity which context does not always allow. In our experience capacity development is hard, precisely because of trying to find the balance between all of the competing ‘best practice’ principles.
    There isn’t space here to talk about other forms of capacity development beyond workshops. But we must make a distinction between training for skills and capacity building around concepts and ideas – changing behaviours is slower, harder and more difficult to test with a diagnostic.
    Julie

  5. Thank you Kirsty for the insightful thoughts on capacity building. It must also be noted that, despite a strong selection process for CB participants, capacity building should not be viewed as an end in itself but as a means to an end. The learning process also revolves around trying to change attitudes, perceptions and practices, which might take some time. The biggest challenge is creating interest in the subject matter, which might contribute to change in behaviour and practice.

  6. Also, expect a (disappointingly?) high level of “failure” of impact of much of the content – is that fair? What I mean is, if you run some capacity development training that has, say, 10 key learning points, do not expect realistic success to be participants demonstrating capacity in all 10 areas. My experience is that that is overly ambitious and leads to disappointment for everyone concerned. A smaller number of demonstrable areas of impact, from the suite of those that are possible, might be better.

    Think about the best training you have ever been on. Did you learn everything that was covered? Or did you actually only take away a couple of changes in behaviour that have really had an impact in the long run? I know that is true for me. Perhaps it is true for many capacity development programmes?

  7. Thanks Kirsty for a really thought provoking set of recommendations for good capacity building. I tend to agree with Ian Thornton at UKCDS (first commenter on your first post) in that it’s not so much that we don’t know what to do but that we can’t do it for political or logistical reasons. For example a lot of capacity building is provided through financial aid, which means it is (a) under the control of the recipient Government and (b) part of large, sometimes joint donor programmes where individual development agencies can’t maintain a high level of quality assurance because the sheer number and scale of activities is too great compared to the amount of donor staff time. Are you effectively suggesting that we should only do capacity building directly ourselves, where we have a very high level of control over the product? This would probably mean concentrating on fewer and smaller projects, with higher admin costs – which goes against current wisdom on ‘getting more bang for your buck’. Or could you see circumstances in which large scale capacity building, potentially through financial aid, might work?

    • Thanks Justin – this is a really good point. In fact quite a number of comments are making the valid point that the rules are great in theory but can be challenging – even impossible – to implement in the real world.

      On your question re. devolving control over capacity building to local governments I guess the only way to answer this would be to do really good, objective evaluations to check whether anything is actually being learnt in these programmes. My hunch is that in quite a lot of them the actual learning is pretty limited. I stick to my original point that capacity building doesn’t need to be difficult but that doesn’t remove the fact that many people are doing it wrong. In the case of government run programmes, I can’t help thinking that the capacity to support effective learning within these governments may not exist – indeed if the capacity to support learning did exist, then they wouldn’t be in such dire need of capacity building programmes!

      In terms of ‘bang for your buck’, if the ‘bang’ you want (!) is actual learning, then concentrating on fewer well-implemented programmes might well turn out to be more cost effective than supporting big capacity building programmes whose main impact is to line a whole lot of civil servants’ pockets with per diems!

      Perhaps this is an overly pessimistic view though and only by properly testing what is being learnt in these programmes could we really say… What do you think?

      • Thanks Kirsty – I tend to agree with you (and have some personal experience of unsuccessful programming in this area!). It’s just that the conclusion that we need to be designing fewer programmes with (much) more hands-on control runs against at least two current development orthodoxies: (1) that it is better value for money to have a smaller number of larger programmes; and (2) that national ownership of development programmes is key.

        I don’t really have a problem with trashing orthodoxy (1) as I would rather know for sure that we are having a small impact than have a vague sense that we might possibly be having a large impact. (Although if we want to go this way, we need to increase resources for M&E significantly from current levels… and I am mainly talking about people, not money.) But I do have a bit more of a problem with orthodoxy (2) – because imposing all the sensible restrictions, e.g. on who can attend, pedagogy, follow-up etc., may lead the project towards being more ‘donor dominated’ and dampen enthusiasm on the other side.

        Perhaps a fifth rule could be to try to explain the reasons/pedagogy behind the four other rules as we go along, so that in the longer term developing country governments are in a better position to build their own capacity. This won’t be easy, as there are strong political and bureaucratic incentives in favour of the per diem culture, and resourcing this kind of high-end design and evaluation of learning is expensive even for us, never mind developing countries. But it’s worth a try.

  8. Thanks very much for sharing your thoughts and interesting experiences Kirsty.
    I do not disagree with any of your rules, but I would disagree with your initial premise. Although capacity building is not ‘hard’ in the sense that quantum mechanics is ‘hard’, it is complex to implement, and it’s also long-term, when many things in our fast-paced world are not (trying to maintain the capability and resources to use rapidly changing technology is a challenge for everyone). It’s complex because there is a diverse range of social complexities to be navigated and managed in order for individual and organisational learning to be effective, and a host of things that need to be aligned and integrated for these to successfully add up to improved organisational capacity.

    This complexity makes it challenging (rather than hard, perhaps). We probably know how to do capacity building in an ideal world, but we don’t live in one. The myriad factors that change between contexts (political economy, existing capacity, needs (perceived differently by different people), cultural and social norms) and the ever-changing approaches and demands of donors mean that each time it is far harder than it should be (and far less successful than we would like it to be). I can think of examples where DFID does not always make rules 1 & 2 easy to abide by (and I say that as someone who works at DFID).

    These rules also probably apply more to individual learning than to CB at an organisational or environmental/systems level – but then the term ‘capacity building’ covers such a multitude of things, and it’s difficult to talk about all three levels at once as they are all quite different.

    If you think about some of the frustrations we have with our own organisations (most definitely including DFID) – I would suggest the capacity building we need is not ‘easy’ to deliver! However, I agree that there are ‘simple’ rules that we have learnt and that are not nearly widely enough applied. We should at least try our best to apply what we know as best we can.

    Thanks,
    Kate

  9. Pingback: Capacity building rule 3: do stuff to facilitate learning | kirstyevidence

  10. Kirsty – you make a good point about the effect of environment on learning – in this case, the importance of growing up in a questioning environment for developing critical thinking skills. As I look back on how my own skills have grown and my behaviour changed over a thirty year career in scientific publishing, I note that formal training was far less significant than a general consensus on the definition of success (a definition worked out by a number of people over a long period of time), inspiring role models, a supportive manager, winning on some fronts, losing on others and working out the difference between the two.

    Two academics have recently talked to me convincingly about the subtle networks and connections that reinforce acquired knowledge when doing academic research – the field in which INASP does a lot of capacity-building. For example, the repeated support of an academic supervisor in understanding in which journals and at which conferences crucial knowledge in a subject is communicated, or the knowledge of where to submit an article that comes from someone down the hall who has refereed for a journal and can give the inside account of the editorial process. And then I imagine what it would be like as a UK researcher if most of those journals, conferences and their collaborators were, say, in Malawi. It would be pretty mystifying, I think.

    Jon Harle’s comment pulls out the importance of time and timing and the need to provide longer, slower processes of mentoring and interactions.

    So, what can we do to reinforce the small, temporary interventions that we are often offering as a substitute for an organic, multi-dimensional experience that has developed for most of us in the north over the course of generations?

    This is where I think online learning holds some promise. Courses can more often be taken at the time most useful for the learner (likely to be close to the point of actually having to do something real and important in their working life). And they can be constructed to maximise the opportunity to interact with peers in getting answers to questions. And there can be post-training communities of practice which allow participants to get reminders, exactly when they need them, of the points they have forgotten. I saw a beautiful, simple example of just that in an INASP Facebook group for recently trained journal editors (the training itself was actually a face-to-face workshop). There had been some bonding of the group face to face. There were lots of pictures posted on the group page, creating a sense of continued comradeship. And in that context, it was so easy for a participant to post “I am just trying to load an issue online, can you remind me how…?” And for the link to the answer to be swiftly supplied. Just in time. Happy trainee, happy trainer.
