kirstyevidence

Musings on research, international development and other stuff

Guest post on Pritchett Sandefur paper

1 Comment

Readers, I am delighted to introduce my first ever guest post. It is from my colleague Max – who can be found lurking on twitter as @maximegasteen – and it concerns the recent Pritchett/Sandefur paper. Enjoy! And do let us know your thoughts on the paper in the comments.

Take That Randomistas: You’re Totally Oversimplifying Things… (so f(x) = a₀ + Σₙ₌₁^∞ (aₙ cos(nπx/L) + bₙ sin(nπx/L))…)


The quest for internal validity can sometimes go too far…
(Find more fab evaluation cartoons on freshspectrum.com)

Development folk are always talking about “what works”. It usually crops up in a research proposal that opens with “there are no silver bullets in this complex area” and then, a few paragraphs later, ends with a strong call that “we need to know what works”. It’s an attractive and intuitive rhetorical device. I mean, who could be against finding out ‘what works’? Surely no-one* wants to invest in something that doesn’t work?

Of course, like all rhetorical devices, “what works” is an over-simplification. But a new paper by Lant Pritchett and Justin Sandefur, Context Matters for Size, argues that this rhetorical device is not just simplistic, but actually dangerous for sensible policy making in development. The crux of the argument is that the primacy of methods for neat attribution of impact in development research, and donors’ giddy-eyed enthusiasm when an RCT is dangled in front of them, lead to some potentially bad decisions.

Pritchett and Sandefur highlight cases where, on the basis of some very rigorous but limited evidence, influential researchers have pushed hard for the global scale-up of ‘proven’ interventions. The problem with this is that while RCTs can have very strong internal validity (i.e. they are good at demonstrating that a given factor leads to a given outcome), their external validity (i.e. the extent to which their findings can be generalised) is often open to question. Extrapolating from one context, often at small scale, to a very different one can be very misleading. They go on to use several examples from education to show that estimates using less rigorous methods, but drawn from the local context, are a better guide to the true impact of an intervention than a rigorous study from a different context.

All in all, a sensible argument. But that is kind of what bothers me. I feel like Pritchett and Sandefur have committed the opposite rhetorical sin to the “what works” brigade – making something more complicated than it needs to be. Sure, it’s helpful to counterbalance some of the (rather successful) self-promotion of the more hard-line randomistas’ favourite experiments, but I think this article swings too far in the opposite direction.

I think Pritchett and Sandefur do a slight disservice to people who support evidence-informed development (full disclosure: I am one of them) by assuming they would blindly apply the results of a beautiful study from across the world to the context in which they work. At the same time (and here I will enter doing-a-disservice-to-people-working-in-development territory myself), I would love to be fighting colleagues on the frontline who are trying to ignore good quality evidence from the local context in favour of excellent quality evidence from elsewhere. But in my experience I’ve faced the opposite challenge, where people designing programmes put more emphasis on dreadful local evidence to make incredible claims about the potential effectiveness of their programme (“we asked 25 people after the project if they thought things were better and 77.56% said it had improved by 82.3%” – the consultants masquerading as researchers who wrote this know who they are).

My bottom line on the paper? It’s a good read from some of the best thinkers on development. But it’s a bit like watching a series of The Killing – lots of detail, a healthy dose of false leads/strawmen, but afterwards you’re left feeling a little bewildered: did I have to go through all that to find out not to trust the creepy guy who works at the removal company/MIT?

Having said that, it’s always useful to be reminded that the important question isn’t “does it work (somewhere)?” but “did it work over there, and would it work over here?”. I’d love to claim credit for this phrase, but sadly someone wrote a whole (very good) book about it.


*With the possible exception of Lyle Lanley who convinced everyone with a fancy song and dance routine to build a useless monorail in the Simpsons



One thought on “Guest post on Pritchett Sandefur paper”

  1. Thanks for this post Max and Kirsty, and for pointing to the paper.

    I agree with the authors, context matters, but my take on it is that the paper fixates on a methodology (RCTs) whereas the lessons I see are actually about being able to understand the limitations of different research methods and knowing when to use different types of research evidence.

    There is a tension here between the expertise within big development organisations and governments to use research evidence, and the desire of decision makers for technical fixes. To my mind, neither of these is a problem with the methodology.
