
Impact via infiltration

Two blogs ago I linked to an article which I attributed to “the ever-sensible Michael Clemens”. Shortly afterwards @m_clem tweeted:

.@kirstyevidence compares @JustinSandefur to @jtimberlake 🔥—but me, I’m “sensible”. Some guys got it, some don’t. https://t.co/KphJ0ITmr8

I mention this in part because it made me chortle but mainly because it prompted me to look back at that Justin Sandefur interview. And on re-reading it, I was really struck by one of Sandefur’s models for how research relates to policy:

My third model of what research has to do with development policymaking is borderline cynical: let’s call it a vetting model.  The result of your narrow little research project rarely provides the answer to any actual policy question.  But research builds expertise, and the peer review publication process establishes the credibility of independent scientific experts in a given field.  And that — rather than specific research results — is often what policymakers are looking for, in development and elsewhere.  Someone who knows what they’re talking about, and is well versed in the literature, and whose credentials are beyond dispute, who can come in and provide expert advice.

Since that interview was published, I wrote a literature review for DFID which looked at the impact of research on development. And, having spent months of my life scouring the literature, I am more convinced than ever that the Sandefur/Timberlake effect (as it will henceforth be known) is one of the main ways in which investment in research leads to change.

This pathway can be seen clearly in the careers of successful researchers who become policy makers or advisors. For example, within DFID, the chief scientist and chief economist are respected researchers – and the significant impacts they have had on policy decisions within DFID surely rival any impacts on society they have achieved via their academic outputs.

And the case may be even stronger if you also examine ‘failed scientists’ – like, for example, me! The UK Medical Research Council invested considerable funding in my PhD studies and post-doc career, and I would summarise the societal impact of my research days as… pretty much zilch. I mean, my PhD research was never even published, and my post-doc research was on a topic which was niche even within the field of protozoan parasite immunology.

Undercover nerds – surely the societal impact of all those current and former academics goes beyond their narrow research findings?

In other words, I wouldn’t have to be very influential within development to achieve more impact than I did in my academic career. My successful campaign while working at the Wellcome Trust to get the canteen to stock diet Irn Bru probably surpasses my scientific contributions to society! But more seriously, I do think that the knowledge of research approaches, the discipline of empirical thinking, the familiarity with academic culture – and, on occasion, the credibility of being a ‘Dr’ – have really helped me in my career. Therefore, any positive – or indeed negative – impact that I have had can partly be attributed to my scientific training.

Of course, just looking at isolated individual researchers won’t tell us whether, overall, investment in research leads to positive societal impact – and if so, whether the “S/T effect” (I’m pretty sure this is going to catch on so I have shortened it for ease) is the major route through which that impact is achieved. Someone needs to do some proper research on this.

But it’s interesting to note that other people have a similar hypothesis: Bastow, Tinkler and Dunleavy carried out a major analysis of the impact of social science in the UK* and their method for calculating the benefit to society of social science investments was to estimate the amount that society pays to employ individuals with post-grad social science degrees.** In other words, they assumed that the major worth of all that investment in social science lies not in its findings but in the generation of experts. I think it is fabulous that the authors are experimenting with new methodologies, going beyond the outdated linear model, to explain the value of research.

But wait, you may be wondering, does any of this matter? Well yes, I think it does because a lot of time and energy are being put into the quest to measure the societal impact of research. And in many cases the impact is narrowly defined as the direct effect of research findings and/or research derived technologies. The recent REF impact case studies did capture more diverse impacts including some that could be classified within the S/T™ effect. But I still get the impression that such indirect effects are seen as secondary and unimportant. The holy grail for research impact still seems to be linear, direct, instrumental impact on policy/practice/the economy – despite the fact that:

  1. This rarely happens
  2. Even when we think this is happening, there is a good chance that evidence is in fact just being used symbolically
  3. Incentivising academics to achieve direct impact with their research results can have unintended and dangerous results

Focussing attention on the indirect impact of trained researchers, not as an unimportant by-product but as a major route by which research can impact society, is surely an important way to get a more accurate understanding of the benefits (or lack thereof) of research funding.***

So, in summary, I think we can conclude that Justin Sandefur is quite a sensible bloke.

And, by the way, have any of you noticed how much Michael Clemens resembles George Clooney?


* I have linked to their open access paper on the study but I also recommend their very readable book which covers it in more detail along with loads of other interesting research – and some fab infographics.

** Just to be pedantic, I wonder if their methodology needs to be tweaked slightly – they have measured value as the cost of employing social science post-grad degree holders, but surely those graduates would have some worth even without their research training? I would think that the real benefit would need to be measured as the excess that society is willing to pay for a social science post-grad degree holder compared to someone without one.
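To make that quibble concrete, here is a minimal sketch in Python contrasting the two valuation approaches. Every figure in it is invented purely for illustration; none comes from the Bastow, Tinkler and Dunleavy study:

```python
# Two ways of valuing a cohort of social science post-grads.
# ALL figures below are made up for illustration only.
n_graduates = 1_000            # hypothetical cohort size
salary_with_degree = 45_000    # assumed avg salary of a post-grad degree holder
salary_without_degree = 35_000 # assumed avg salary of a comparable non-holder

# Bastow/Tinkler/Dunleavy-style estimate: what society pays to employ the cohort.
total_employment_cost = n_graduates * salary_with_degree

# Wage-premium alternative: only the excess society pays for the research training.
wage_premium_value = n_graduates * (salary_with_degree - salary_without_degree)

print(f"Employment-cost estimate: £{total_employment_cost:,}")
print(f"Wage-premium estimate:    £{wage_premium_value:,}")
```

On these made-up numbers the two approaches differ by a factor of 4.5, which is why the choice of method matters so much for any headline figure about the value of research training.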

*** Incidentally, this is also my major reason for supporting research capacity building in the south – I think it is unrealistic to expect that building research capacity is going to yield returns via creation of new knowledge/technology – at least in the short term. But I do think that society benefits from having highly trained scientific thinkers who are able to adapt and use research knowledge and have influence on policy either by serving as policy makers themselves or by exerting evidence-informed influence.


Race for impact in the age of austerity

I have recently been pondering what the age of austerity means for the development community. One consequence which seems inevitable is increasing scrutiny of how development funds are spent. The principle behind this is hard to argue with; money is limited and it seems both sensible and ethical to make sure that we do as much good as possible with what we have. However, the way in which costs and benefits are assessed could have a big impact on the future development landscape. Already, some organisations are taking the value for money principle to its logical conclusion and trying to assess and rank causes in terms of their ‘bang for your buck’. The Open Philanthropy Project has been comparing interventions as diverse as cash transfers, lobbying for criminal justice reform and pandemic prevention, and trying to assess which offers the best investment for philanthropists (fascinating article on this here).

The Copenhagen Consensus project* is trying to do a similar thing for the Sustainable Development Goals; using a mixture of cost-benefit analysis and expert opinion, they are attempting to quantify how much social, economic and environmental return development agencies can get by focussing on different goals. For example, they find that investing a dollar in universal access to contraception will result in an average of $120 of benefit. By contrast, they estimate that investing a dollar in vaccinating against cervical cancer will produce only a $3 average return. Looking over the list of interventions and the corresponding estimated returns on investment is fascinating and slightly shocking. A number of high-profile development priorities appear to give very low returns, while some of the biggest returns correspond to interventions such as trade liberalisation and increased migration which are typically seen as outside the remit of development agencies (good discussion of the ‘beyond-aid agenda’ to be found from Owen Barder et al. at CGD e.g. here).
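For the numerically inclined, the comparison boils down to ranking interventions by benefit-cost ratio. Here is a minimal sketch in Python using only the two figures quoted above – it illustrates the ranking logic, not the Copenhagen Consensus methodology itself:

```python
# Rank interventions by benefit-cost ratio (BCR): estimated dollars
# of social/economic/environmental benefit per dollar invested.
# The two figures are the ones quoted in the post above.
interventions = {
    "universal access to contraception": 120.0,  # $120 benefit per $1
    "cervical cancer vaccination": 3.0,          # $3 benefit per $1
}

# Sort from highest to lowest return on a dollar invested.
for name, bcr in sorted(interventions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"${bcr:6.1f} of benefit per $1 invested -> {name}")
```

The arithmetic is trivial; the hard (and contestable) part is producing the benefit estimates that go into the dictionary in the first place.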

In general, I find the approach of these organisations both brave and important. Of course there needs to be a lot of discussion and scrutiny of the methods before these figures are used to inform policy – for example, I had a brief look at the Copenhagen Consensus analysis of higher education and found a number of things to quibble with, and I am sure that others would find the same if they examined the analysis of their area of expertise. But the fact that the analysis is difficult does not mean one should not attempt it. I don’t think it is good enough that we continue to invest in interventions just because they are the pet causes of development workers. We owe it both to the taxpayers who fund development work and to those living in poverty to do our best to ensure funds are used wisely.

Achieving measurable impacts without doing anything to address root causes

Having said all that, my one note of caution is that there is a danger that these utilitarian approaches inadvertently skew priorities towards what is measurable at the expense of what is most important. Impacts which are most easily measured are often those achieved by solving immediate problems (excellent and nuanced discussion of this from Chris Blattman here). To subvert a well-known saying, it is relatively easy to measure the impact of giving a man a fish, more difficult to measure the impact of teaching a man to fish** and almost impossible to measure, let alone predict in advance, the impact of supporting the local ministry of agriculture to develop its internal capacity to devise and implement policies to support long-term sustainable fishing practices. Analysts in both the Copenhagen Consensus and the Open Philanthropy projects have clearly thought long and hard about this tension and seem to be making good strides towards grappling with it. However, I do worry that the trend within understaffed and highly scrutinised development agencies may be less nuanced.

So what is the solution? Well, firstly, development agencies need to balance easy-to-measure but low-impact interventions with tricky-to-measure but potentially high-impact ones. BUT this does not mean that we should give carte blanche to those working on tricky systemic problems to use whatever shoddy approaches they fancy; too many poor development programmes have hidden behind the excuse that it is too complicated to assess them. Just because measuring and attributing impact is difficult does not mean that we can’t do anything to systematically assess intermediate outcomes and use these to tailor interventions.

To take the example of organisational capacity building – which surely makes up a large chunk of these ‘tricky’ to measure programmes – we need to get serious about understanding what aspects of design and implementation lead to success. We need to investigate the effects of different incentives used in such projects including the thorny issue of per diems/salary supplements (seriously, why is nobody doing good research on this issue??). We need to find out what types of pedagogical approach actually work when it comes to supporting learning and then get rid of all the rubbish training that blights the sector. And we need to think seriously about the extent of local institutional buy-in required for programmes to have a chance of success – and stop naively diving into projects in the hope that the local support will come along later.

In summary, ever-increasing scrutiny of how development funds are spent is probably inevitable. However, if, rather than fearing it, we engage constructively with the discussions, we can ensure that important but tricky objectives continue to be pursued – but also that our approach to achieving them gets better.

* Edit: thanks to tribalstrategies for pointing out that Bjorn Lomborg who runs the Copenhagen Consensus has some controversial views on climate science. This underscores the need for findings from such organisations to be independently and rigorously peer reviewed.

**High five to anyone who now has an Arrested Development song on loop in their head.