Many donors who fund development research seek to measure the policy influence that a given research project (or group of projects) has had. This is an important and valid exercise, and the approach has yielded some interesting lessons (see for example here and here). However, it is important to realise that this is not the same as measuring whether policy is evidence-informed.
To illustrate this, let's imagine a story. A large agricultural company carries out research demonstrating that a new strain of genetically modified maize increases yield by 50%. To disseminate this research, the company organises an all-expenses-paid trip to one of its farms in a neighbouring country for a group of MPs who sit on the agricultural select committee. Following this trip, the MPs recommend that the government change its policy to allow the new strain.
In this case, you can certainly say that the research (or at least the method of disseminating it) has had policy influence. But would you say that the policy is evidence-informed? Arguably not. Evidence-informed policy is policy that has considered and weighed a range of evidence. In this case, for example, you would expect the MPs also to have looked at research on the environmental, economic and social impacts of the new strain. If they have not considered this range of evidence, then the policy is not evidence-informed – at least not by my definition.
This may be an extreme case, but I think it illustrates the point: finding out that a given piece of research has had impact does not necessarily mean that the resulting policy is evidence-informed. It may simply mean that those communicating the research lobbied more effectively than others. If the research is good and the resulting policy change benefits poor people, this can look like a positive outcome. But the danger is that policy makers will be equally swayed by the next lobby group that comes along and argues its point effectively – and that group may not have such altruistic motives!
What those of us who support evidence-informed policy would prefer to see is policy makers (and those who advise them) systematically evaluating the evidence base. We would prefer them to be swayed by the quality of the evidence rather than by the charm of the communicator! Communication is important, but we must not 'dumb down' the role that policy makers can and should play.
Of course, these two approaches can go hand in hand. Measuring the policy impact of research is an important way for donors to evaluate whether the research they fund is policy-relevant and communicated effectively. But if we are serious about promoting evidence-informed policy, we also need to look at whether policy makers, and the institutions in which they work, have the capacity and incentives to routinely consider a range of research evidence when making policy decisions.
PS: Also check out this blog on a similar topic.