The excellent article by Sally Cupitt – Head of NCVO Charities Evaluation Services – on Randomised Controlled Trials and their use within the voluntary and community sector provides not just an informed explanation of what they are and how they work, but also a critique of their application. I welcome any contribution that highlights the use of experimental methods – as a paid-up member of the experimentalist club – particularly from someone as well informed and experienced as Sally.
My experience of Randomised Controlled Trials at Lambeth Council and elsewhere is similar to the picture painted in the article – that RCTs are not generally understood and are only used very occasionally by not-for-profit organisations (and indeed by local government and the public sector), though there is growing interest. The list of seven challenges that using RCTs poses to the voluntary sector is certainly very comprehensive and I am not going to argue (much) that they are not valid concerns.
Who can say that scale and timescale, technical skills, ethical issues, generalisation and the need for other evaluation methods aren’t real considerations?
But I look at that list and can’t help thinking that they could be applied to pretty much any decent and reliable evaluation method you might consider using. Of course different methods have their strengths and weaknesses and some will require more technical skill, or pose more challenges of generalisation…but they are all likely to be there with any method we might consider using. That is the nature of evaluation – there is not a single method or approach that will do everything you need all of the time (whatever some of the advocates of these approaches might tell you!).
One thing that Sally and I certainly agree on is that RCTs are not suitable for evaluating every programme or initiative. Not everything can be measured through an RCT and sometimes, even if you could, you shouldn’t – the selection of evaluation methods needs to be proportionate to the scale and nature of the programme. In fact that reminds me of discussions Sally and I had about 15 years ago when she was helping me and my team to develop an approach to evaluating influencing policy-making. We realised that we could design a perfect system for evaluating the outcomes we wanted to measure, but only if we spent all our time and all our money on doing it. Evaluation needs to be proportionate. So, like all evaluation methods, we need to use RCTs selectively and appropriately.
I find a lot of people think that RCTs have to be massively complicated, prohibitively expensive and are only used by morally lacking purveyors of the ‘dark arts’ of manipulation. And of course there are many (mostly private sector) firms that use trials to sell unsuspecting people things they don’t want….or something like that. However I think we need to bust a few myths about RCTs here – at least based on my experience.
Whilst I accept that some RCTs are terribly expensive and horribly complicated, they don’t have to be. I know this from having run successful RCTs at a number of local authorities. It comes down to using the method appropriately.
We’ve found it most suitable to test small changes to communications and messaging – to see which variation works best. That is very different to evaluating whole programmes, where you have to track people over long periods of time to see how they behave. That way lies complexity and expense. But if we want people to respond in particular ways – whether that’s signing up to take part in an initiative, or responding to a specific invitation or request in a particular way (behaviour change) – then an RCT can work well.
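To illustrate how lightweight a messaging trial of this kind can be, here is a minimal sketch of the randomisation step – splitting a mailing list at random between two letter variants. The contact names, variant labels and list size are all hypothetical, not taken from any real trial:

```python
import random

def assign_variants(contacts, variants=("A", "B"), seed=42):
    """Randomly assign each contact to a message variant.

    Randomisation is what lets us attribute any difference in
    response rates to the message itself, rather than to who
    happened to receive which letter.
    """
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    return {contact: rng.choice(variants) for contact in contacts}

# Hypothetical mailing list of 1,000 residents
contacts = [f"resident_{i}" for i in range(1000)]
allocation = assign_variants(contacts)

group_a = [c for c, v in allocation.items() if v == "A"]
group_b = [c for c, v in allocation.items() if v == "B"]
print(len(group_a), len(group_b))  # two groups of roughly equal size
```

In practice the randomisation can live in a spreadsheet or mail-merge tool just as easily; the essential point is that allocation is by chance, not by choice.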
I would go so far as to say that, with ever-advancing technology, we now have the opportunity to run RCTs at a lower cost and more simply than many other evaluation methods (after a bit of expense on the initial set up).
Scale is an issue – and clearly local authorities have a natural advantage over most charities and community groups in the size of their operation (particularly with universal services). But I see this as an opportunity to encourage collaboration, sharing and ultimately drive up standards across the VCS, by working together to evaluate interventions by using RCTs (that also helps to address the issues around generalisation).
Another concern people often express when I talk to them about using RCTs is that the process is depriving people of something. Of course that is true – but we do that all the time when we pilot new approaches, without batting an eyelid. How is it different? If we knew something would work we would do it and not run pilots or prototype new approaches. We do them because we don’t know but we want to find out – and by using an RCT we can be more confident that the results we observe are down to our actions, not because of any other factors that might make the pilot area or group different to another group.
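To make that “more confident” claim concrete: because allocation was random, a simple two-proportion test tells us how likely the observed difference in response rates would be if our intervention had no effect at all. The sketch below uses only the standard library, and the response counts are invented purely for illustration:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is the gap in response rates between
    the two groups bigger than chance alone would plausibly produce?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical trial: 120 of 500 responded to the new letter,
# 90 of 500 to the existing one
z, p = two_proportion_z(120, 500, 90, 500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value here means the difference is unlikely to be down to chance – which, thanks to randomisation, leaves our intervention as the explanation.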
RCTs do require a degree of technical expertise, that’s true, but it doesn’t mean needing to do a Masters or a PhD in experimental methods (though if you want to, I can recommend courses that Professor Peter John runs at UCL). There’s a lot you can learn from reading the resources out there, or from the support that is available. And inevitably, as the use of RCTs grows, so too will the support available to run them – and I’m very happy to share my experience with anyone who’s interested!
Just because they are new and we have to learn how to use them appropriately – much like any other innovation – doesn’t mean we should write them off as being too difficult to bother with. Aiming high and believing things are possible however improbable they might seem is a hallmark of the VCS and one which can be applied to evaluation as much as anything else.