Today we are moving into political nerdiness with political scientist, Anna Paterson.
1. What flavour of nerdy scientist/researcher are you?
I’m a political scientist who has ended up with a focus on fragile and conflict-affected countries. I’m mostly involved in applied research, so I often feel I’m on the more ‘quick and dirty’ end of the nerd spectrum. I’ve worked in political research for almost my whole career: first in the private sector, then as a Research Analyst in the Foreign Office, and then for nearly two years as a field researcher for an NGO in Afghanistan. After that I did a PhD looking at Russian approaches to, and the regional dimensions of, security and development in Afghanistan. I got to know a lot of former Soviet ‘technical experts’ who had worked in Afghanistan in the 70s and 80s, and became interested in the role political ‘experts’ have played in aid, development and security, then and now.
More recently I worked for DFID as part of its evidence-based agenda, first as an ‘Evidence Broker’ in the Research and Evidence Division and then as an Evaluation Adviser in Nigeria.
2. What do you do now?
I’m now an independent consultant, for my sins! I wanted to get back to the actual ‘doing’ of research and evaluation. I have worked, and am working, on a number of projects. Some of these are evaluations of programmes, including a human rights grantmaking programme in Nigeria, a programme that trains African police to participate in African peacekeeping missions, and a thematic evaluation of a donor country programme in Yemen. Some are more ‘researchy’: for example, I’m just starting work on a research project on conflict and security in communities on the Afghan-Tajik border.
The job involves designing the research with the client, carrying out or overseeing data collection, analysing the data and producing reports. In some projects I’m part of a big team; in others it’s just me and another researcher. Juggling timings is a big issue, and what I’ve realised is that in the types of countries I go to, everything tends to get delayed.
3. What has research got to do with international development?
Where the political science literature on conflict and fragility is concerned, a lot of the research being used in development is at the macro and theoretical level. So we have a lot of really good research and debate about state-building, for example on ‘political settlements’, that is really important and that policymakers and practitioners are engaged with.
What we have less of is the data and research that helps us to understand what’s going on, and what will happen when we try to intervene, on the ground in conflict-affected countries. Often programmes in conflict-affected countries are working with bad or near-absent data and without evidence of what types of interventions work in what circumstances. A lot of this is about supporting country systems for collecting data, but if we’re going to intervene in conflict-affected countries we also need to do robust research at the intervention level, not just in health but in governance and peacebuilding interventions. In DFID I became really interested in rigorous Impact Evaluations, including Randomised Controlled Trials (RCTs), especially in my field – political science – where they are relatively new. I’m not saying that rigorous Impact Evaluations can or should be used for all interventions, but I think we need more of them in conflict-affected contexts, where it’s really important that we understand more about the impacts of interventions.
4. What have you been up to recently?
I’ve been on a feasibility study for a potential RCT of a community peacebuilding programme in Somalia. If it goes ahead I’ll be leading on the process evaluation and qualitative component of the evaluation, which is a really exciting role for a qualitative researcher. Good RCTs take qualitative research very seriously and are pretty good at mixing methods purposively. It’s been fascinating and challenging. There has been a lot of intense debate, and we have had to take on board and deal with many very reasonable concerns. This is a real-world intervention, similar to other types of interventions increasingly conducted at the community level and aimed at armed violence reduction. It’s much harder to design an impact evaluation for a real-world intervention than to design an intervention just for the purposes of an impact evaluation, which is the way some impact evaluations work. It requires more compromise and accommodation on the part of the implementers and the evaluators. But I would argue we need more of these types of evaluations. I know the debate about RCTs is very polarised. But if you think that we do need more of them, as I do, the steps that follow on from that are not straightforward. These evaluations are difficult to get off the ground. Some of them won’t work in the field. Those that do need to be very carefully designed and implemented, especially in the context of conflict. I hope this evaluation gets off the ground, and if it doesn’t I hope to have other opportunities to work on this kind of study.
5. What advice would you give to other science types who want to work in development?
I’ve always liked the kind of research that gets you out into the field. I remember someone in my department when I did my PhD telling me that ‘political scientists don’t tend to go to the field that much.’ I think political scientists should get their hands dirty – especially when they’re young (which I’m not anymore!).
6. Tell us something to make us smile
I like the feminist social scientist Ann Oakley. She reminds us that robust data and evaluations are needed by women and by vulnerable and marginalised groups more than anyone, precisely because of power dynamics – to arm themselves against the arrogance of those in power who ‘are so prone to launch interventions without knowing their effects.’ I also like her because she said: ‘housework is work directly opposed to the possibility of human self-actualization.’
Oakley, Ann, ‘Paradigm Wars: Some Thoughts on a Personal and Public Trajectory’, International Journal of Social Research Methodology, 1999, Vol. 2, No. 3, pp. 247–254.