A fast track to blogging success in the development field is to pick a research approach (RCTs, econometrics, rigorous synthesis, qualitative research, etc.), ‘reveal’ that the approach has some drawbacks, and conclude that research is bad (or at least highly suspect). Whenever I see such an article, it strikes me as a little akin to giving up on our judicial system because there are sometimes miscarriages of justice.
I mean, clearly, any method of gathering evidence to inform decisions has limitations. And of course the people making decisions will be informed by a whole lot of other factors. But these facts don’t make me want to give up on evidence entirely but rather inspire me to think about how we can reduce the drawbacks of research approaches and/or support people and processes so that evidence is routinely considered as one part of the decision-making process.
So it was with real pleasure that I read this new paper from the ODI. The paper gives an overview of both ‘traditional’ literature reviews and systematic reviews, and outlines some drawbacks of each. However, rather than declaring both useless, the authors go on to propose an intermediate approach which combines:
“..compliance with the broad systematic review principles (rigour, transparency, replicability) and flexibility to tailor the process towards improving the quality of the overall findings, particularly if time and budgets are constrained”.
What makes this paper particularly useful is that it sets out a clear eight-step process which potential authors can follow. The authors give plenty of detail on how each stage can be carried out, and they include a wealth of useful tips (for example, I learnt about the concept of ‘forward snowballing’ – who knew?). I think many people find the idea of carrying out a rigorous review quite intimidating and will find this an invaluable guide. I also love the inclusion of a graphical representation of synthesised evidence – as I have mentioned before, I think we need to get more inventive at communicating bodies of evidence.
The authors don’t shy away from discussing the challenges of their proposed approach – with particular attention paid to the difficulties of assessing the ‘strength’ of evidence. I would tend to be slightly more positive about attempting some type of assessment of evidence strength – and I am not sympathetic to the argument that authors are unable to include method sections due to restrictive word count rules (have you seen how long some academic papers from the ODI are?!). Having said that, I do completely agree with the authors that this is the most challenging – and the most political – part of evidence synthesis and that there will always be a degree of subjectivity.
I did think the authors fell slightly into strawman territory when listing how their approach differs from SRs. A few of the differences do not really exist. For example, they mention that meta-analysis is not a useful way to synthesise data for many topics – which is true – but meta-analysis is by no means a necessary part of an SR. I would hazard a guess that most systematic reviews in development research do not use meta-analysis – see here and here for examples. They also imply that SRs do not include grey literature. This is definitely not true – any good SR should include a thorough search strategy covering grey literature. See, for example, this guidance from the EPPI Centre, which states:
“In most approaches to systematic reviewing the aim is to produce a comprehensive and unbiased set of research relevant to the review question. Being comprehensive means that the search strategy attempts to uncover published and unpublished, easily accessible and harder to find reports of research studies.”
I do wonder whether these statements were true of some earlier SRs – for example, perhaps meta-analysis has been used inappropriately in the past, and I am sure that not all SRs (particularly when they were first introduced to the development research field) did a good job of capturing material published outside journals. This might explain the impressions reflected in the paper.
In any case, these are minor quibbles. Overall, I think it’s a good and useful paper, and I do hope that it will stimulate more people to think about how we can synthesise evidence in a way which is as objective as possible but is also practical.