Ok, I concede… They do exist. I know that I have previously suggested that they are fictional but last week, at a meeting of development practitioners and policy makers, I met a real live randomista who believed that randomised controlled trials were the best form of evidence in all cases and that there were no reasons why you should not always do them!
So – they exist and they are a bit bonkers! But what was even more striking to me about that particular meeting was how out of proportion people’s fear of RCTs seems to be compared to the very small number of people who think like my new randomista friend.
In fact, I am starting to get the impression that being anti-RCT is a bit of a badge of honour for those in the development field. A number of times recently, I have heard quite influential development wonks come out with statements about RCTs which would be comical if they weren’t so dangerous. To be clear, while RCTs are no silver bullet, they have generated knowledge which has saved millions of lives. So why on earth does the development sector hate them so much??
It seems to me that it’s fuelled by genuine fear that ignorant technocrats are going to shut down important development interventions simply because they do not produce rapid, measurable outcomes. This is a legitimate fear – but RCTs are not the culprit. RCTs are simply a tool, not a political ideology.
This fear has generated a number of myths about RCTs which continue to circulate in blogs and conference talks and don’t seem to die, no matter how many times they are shown to be false. Given their tenacity, I am fairly sure that any attempt I make to disprove them will make little difference – but it’s been a while since I have had a blog-based argument about experimental methodologies, so I think I will give it another go…
So, here are a few myths about RCTs…
MYTH 1: RCTs are useful for measuring things which are easy to count or measure objectively. This is why they are useful in medicine. But they are not useful for measuring ‘softer’ things like changes in attitudes/behaviour/perceptions, particularly when there might be a power imbalance between the investigator and the subject.
This is just not true. Many things which RCTs have been used to measure in medicine are perception-based. For example, there is no objective biochemical marker for how bad a headache is, but RCTs can be used to check whether the perception of the headache improves more when you are given an actual medicine than when you are given a placebo. The fact that improvement in headaches is subjective – and responsive to the power dynamics at play between doctors and patients – is precisely why any response to a pill needs to be compared with the response to a placebo, so that you can get a measure of what the ‘real’ effect is. Changes in perception are particularly affected by the placebo effect, the desire of participants to please the investigator and a host of other biases, which is why RCTs are particularly useful in these cases.
MYTH 2: RCTs = quantitative; all other research = qualitative.
This is a common misunderstanding – in fact there is a great deal of quantitative research which is not experimental. RCTs are the most well-known variety of experimental approach – all this means is that they set up a randomly assigned control group and compare the response in the group which gets the actual treatment to the response in the control group. You can also get quasi-experimental approaches – this simply means that there is a control group but it is not randomly assigned. Any other designs are called observational. These do include qualitative approaches but they also include quantitative research – for example econometric analysis.
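To make the mechanics concrete, here is a toy sketch of what "randomly assigned control group" means in practice. All numbers and outcome scores are made up for illustration – the point is only that random assignment, followed by a comparison of group means, is what defines the experimental design.

```python
import random
import statistics

def estimate_effect(treated_outcomes, control_outcomes):
    """Estimate the treatment effect as the difference in mean outcomes
    between the treatment arm and the control arm."""
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

random.seed(42)

# Hypothetical: 20 participants, randomly split into two arms of 10.
# Random assignment means the only systematic difference between the
# arms should be the intervention itself.
participants = list(range(20))
random.shuffle(participants)
treatment_ids = participants[:10]
control_ids = participants[10:]

# Simulated self-reported improvement scores: both arms improve somewhat
# (placebo effect, wanting to please), the treatment arm a little more.
treated = [random.gauss(6.0, 1.0) for _ in treatment_ids]
control = [random.gauss(5.0, 1.0) for _ in control_ids]

effect = estimate_effect(treated, control)
print(f"Estimated treatment effect: {effect:.2f}")
```

A quasi-experimental design would use the same comparison but without the `random.shuffle` step – the groups exist, but something other than chance decided who is in which.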
MYTH 3: People who support evidence-informed policy believe that RCTs are the gold standard in research.
NOT TRUE (well ok, it may be true for my new randomista friend but it is certainly not a common belief in the circles I move in!). It is true that if you want to find out IF an intervention works, the most rigorous way to find out is to use an experimental approach. This does not mean that you always need to use an experimental approach – sometimes it would be absurd (hat tip to @knezovjb for that link) since there is no other plausible reason for the outcome, and sometimes it is not practical. Observational research approaches are equally important but they are used to answer other questions. For example: How does a particular intervention work, or indeed why does it not? What is the distribution of a certain condition in a population? What is the economic and political situation of a given environment? etc etc. Observational approaches, such as before/after comparisons, are not the best way to check if something works simply because humans are so susceptible to bias – you may well find lots of people report that the intervention has benefitted them when there is actually no real effect of the intervention beyond placebo effects/the desire to please the investigators.
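The bias in before/after comparisons can be shown with simple made-up arithmetic. Suppose (purely hypothetically) that everyone improves by 2 points just from being studied – placebo effects, wanting to please the investigators – and the intervention genuinely adds 1 point on top:

```python
# Hypothetical numbers only, chosen to make the bias visible.
placebo_improvement = 2.0   # improvement everyone reports regardless
true_effect = 1.0           # genuine added benefit of the intervention

baseline = 5.0
after_treatment = baseline + placebo_improvement + true_effect   # 8.0
after_control = baseline + placebo_improvement                   # 7.0

# A before/after comparison credits the intervention with everything:
before_after_estimate = after_treatment - baseline               # biased

# A controlled comparison subtracts out what would have happened anyway:
controlled_estimate = after_treatment - after_control            # unbiased

print(before_after_estimate)  # 3.0 – overstates the effect threefold
print(controlled_estimate)    # 1.0 – recovers the true effect
```

This is the whole argument in miniature: without a comparison group, the placebo improvement is indistinguishable from the intervention’s effect.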
MYTH 4: Development donors only want to fund RCTs.
I often hear people back up their belief in Myth 3 by saying that it is clear that donors mainly believe in RCTs based on the fact that they invest so much money in them. This is just not true! I work at DFID and can say with certainty that the majority of research it funds does NOT use experimental approaches. All the data on what is funded by DFID and many other donors is freely available, so if you don’t believe me, look it up (at some point when I get the time, I would like to take on a summer student to do a project looking at the data…). Similarly, if you look at the evidence which is used in DFID business cases (again all freely available online), the majority is NOT experimental evidence. It is true that there are some bodies which are set up to fund experimental approaches, but just as the fact that the Wellcome Trust only funds medical research does not mean that it thinks agricultural research is less important, the existence of funders of experimental approaches does not in itself mean that there is a grand conspiracy to not fund other research. A variation on this myth arises when people have had funding requests for observational research turned down by a development funder with the feedback that the approach lacked rigour. This is sometimes interpreted as meaning that the donors only like experimental approaches – but this is not true. We desperately need good observational research – but the key word is good. That means being explicit about your methodology, surfacing and discussing potential biases, exploring alternative potential theories of change, considering if what you are measuring really allows you to answer the question you set out to answer, etc etc. See here for some great work on improving the rigour of qualitative approaches to impact assessment.
MYTH 5: RCTs are invariably and uniquely unethical.
It has been suggested that RCTs are unethical since they require that the control group is not given the ‘treatment’ intervention (or at least not at the same time as the treatment group). I think this argument is fairly weak since, whenever an intervention is rolled out, there will be people who get it and those who don’t. It has also been argued that it is unethical to expect participants who are not receiving an intervention to give up their time to contribute to someone’s research when they are not getting any direct benefit in return. I do think this is a valid point that needs to be explored – but this problem is in no way unique to RCTs. In fact, most observational research methods rely on people contributing ‘data’ without getting any direct benefit. Any project that is gathering information from vulnerable populations needs to consider these issues carefully and build an appropriate public engagement strategy.
So, I do think it is really important to have discussions on the value of different types of evidence in different contexts and in fact I am pretty much in agreement with a lot of the underlying concerns that the anti-RCT lobby have: I do get worried that a push to demonstrate results can lead donors to focus more on ‘technological fixes’ to problems instead of doing the ‘softer’ research to understand contexts and explore the reasons why the many existing ‘fixes’ have not achieved the impacts we might have hoped for. But I get frustrated that the debate on this subject tends to become overly polarised and is often based more on rhetoric than facts. I strongly agree with this blog which suggests that we should try to understand each other a bit better and have a more constructive discussion on this topic.