In the medical field, randomized controlled trials (RCTs) are widely used to eliminate bias and demonstrate causality. Does a certain medication actually work, and will it work across all populations? To answer these questions, medicine has relied on RCTs for the past 50 years. But are RCTs an effective way to measure the success of conservation strategies? Does conservation need RCTs?
Craig Leisher, senior social scientist, advocates for RCTs, saying they can help show that conservation strategies actually benefit people. Eddie Game, conservation planning specialist, argues that RCTs are unrealistic and unnecessary for conservation.
We Need to Step Up to the Gold Standard of Impact Evaluations
By Craig Leisher, senior social scientist, The Nature Conservancy
Would you believe the industry-funded Tobacco Institute when it stated, “Causality has not been proven in any of the diseases and conditions linked statistically with cigarette smoking”? Probably not, because we know it’s biased, and bias matters in science. Just ask a climate scientist about World Wildlife Fund predictions on the melting of Himalayan glaciers.
Bias in a scientific study is often subtle yet can have an outsized impact on the results. For example, a small bias in who participates in a study can skew the results. To reduce the potential for bias, researchers often randomly select the study participants. Bias, however, is not the only subtle factor that can influence a study’s results. A multitude of potential external factors can also muddle them.
In the mid-20th century, the medical field developed an elegant study design that can address bias and external confounding factors: randomized controlled trials, or RCTs. An RCT works like this: a sample is randomly drawn from a population and then randomly assigned to a “treatment” or a “control” group. By the law of large numbers, the averages of the treatment and control groups will be close to those of the overall population, and to each other. This greatly reduces bias and balances known and unknown confounding factors alike.
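The balancing property described above is easy to simulate. The sketch below is purely illustrative: it uses a made-up population in which each individual carries an unobserved confounder, and shows that random assignment leaves the two groups with nearly identical averages without anyone measuring the confounder.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: each individual carries an unobserved
# confounder (say, baseline income) drawn from the same distribution.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Randomly assign half to treatment and half to control.
random.shuffle(population)
treatment = population[:5_000]
control = population[5_000:]

# By the law of large numbers, both group means sit close to the
# population mean, so the confounder is balanced across the two arms
# even though it was never observed or controlled for.
print(round(statistics.mean(treatment), 1))
print(round(statistics.mean(control), 1))
```

The two printed means differ only by sampling noise, which shrinks as the groups grow; this is the sense in which randomization neutralizes known and unknown confounders at once.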
RCTs are the force behind most of the major advances in medicine in the past 50 years and are the gold standard of evidence in medicine, education and agriculture. But not yet in conservation.
Conservation today is where medicine was 50 years ago. We believe we know what works, but we don’t test our strategies in a rigorous way to see if this is so. If we keep our heads buried in the sand about rigorous impact evaluations, the evidence that conservation benefits not only nature but also people will remain elusive.
Using RCTs, The Nature Conservancy could answer big questions about social impacts and cost-effectiveness, such as: Do community grazing management plans equitably benefit herders? Which fisheries management tools produce the most benefits to fishers? The environment sector is promising ground for RCTs, given the scarcity of rigorous impact evaluations and the large resource flows involved.
Should we say “no” or should we say “yes” to the rigorous testing of our strategies?
It’s not an easy “yes.” There are valid concerns about RCTs, but they can be addressed. This is why the World Bank, USAID, AusAID, DFID, NORAD, and the Gates Foundation all support the use of RCTs to evaluate impact. To compete in the marketplace of strategies for improving people’s lives, conservation needs the rigorous evidence only RCTs can provide.
Why RCTs Are Not the Answer for Conservation
By Eddie Game, conservation planning specialist, The Nature Conservancy
RCTs are the gold standard for demonstrating causality. This I do not dispute.
What I dispute is that RCTs are a critical evaluation tool that conservation should rush to apply. I doubt that:
a) RCTs are generally realistic in conservation;
b) The results from RCTs are generalizable in a useful way; and
c) This is the standard of evidence/evaluation expected by conservation funders.
The world is not a laboratory, and it is neither possible nor ethical to control the development assistance that communities receive; you can only control what you do. To overcome this inevitable uncontrollability, RCTs rely on a large number of replicates.
Proponents of RCTs always cite their impact in medicine. However, in order to ensure that treatment and control groups are statistically similar enough (because individuals vary in physiology and behavior), countries like Australia and the U.S. insist that treatments in clinical RCTs are repeated several thousand times!
This might be feasible for treatments whose unit of replication is a person or a household (say distributing bed nets or administering some drug), but we rarely replicate water funds, marine protected areas, grazing management, or any other conservation treatment at the scale of a household (and where we do, it usually means the landowners are wealthy and therefore unlikely to be the target of a livelihood-based project).
Ah, but surely we can ask hundreds of households in the same water fund catchment whether the project has improved their well-being? This is known as pseudoreplication, a common trap for conservation’s RCT advocates.
Pseudoreplication occurs when multiple samples from a single treatment unit are analyzed as if they were independent replicates and the results are then used to infer treatment effects. To use the apparently popular medical analogy, imagine you wanted to know the effect of building a hospital on community health. The treatment is the hospital, not the care individuals experience when they go there. Surveying lots of individuals in the community where the hospital was built helps increase our confidence in any trend we see, but to know the effect of the hospital you need to look at lots of communities with new hospitals. It is the same with conservation projects.
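The pseudoreplication trap can be made concrete with a small simulation. The numbers below are invented for illustration: a single community-level shock (unrelated to any project) shifts every household's well-being score together, yet a naive analysis that treats the households as independent replicates reports that shift as a precise "effect."

```python
import random
import statistics

random.seed(1)

# One "treated" community: a single community-level shock (a good
# harvest year, say) shifts every household's well-being score together.
# The shock is shared by all households and owes nothing to the project.
community_effect = random.gauss(0, 5)
households = [60 + community_effect + random.gauss(0, 2) for _ in range(500)]

# Naive analysis: treat the 500 households as independent replicates.
# The standard error shrinks with n, so the shared shock masquerades as
# a precise "treatment effect", even though the number of treated
# communities (the true unit of replication) is one.
apparent_effect = statistics.mean(households) - 60
se_naive = statistics.stdev(households) / len(households) ** 0.5
print(f"apparent effect: {apparent_effect:+.2f} +/- {se_naive:.2f}")
```

However many households are surveyed, the estimate converges on the community shock, not the project's effect; only surveying many communities separates the two, which is the hospital analogy in code.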
The random assignment of participants to treatment or control groups is also a problem. The reality is we do not choose communities to work with randomly because biodiversity and willingness to work with us are not distributed randomly. We very intentionally try to identify communities where there is interest and willingness to work with us, and therefore introduce a number of biases. We could do as they do in medical RCTs and ask communities to agree to participate and then only undertake the interventions in a randomly selected subset of these, but how much good faith would this burn through?
RCTs are about proving or disproving a causal relationship that is hypothesized and can subsequently be generalized. Conservation’s RCT supporters often claim we should use them to rigorously test our strategies so we can replicate them with confidence. However, conservation projects are not well suited to generalizable claims from RCTs. In a variation on the philosopher Nancy Cartwright’s critique of RCTs, what RCTs tell us is “it worked somewhere,” and yet we often read the results as “it will work for us.” Conservation outcomes depend on complex local interactions, and it can be difficult to distinguish between the role of the strategy and the group implementing it.
For example, in Melanesia there have been high-profile examples of conservation organizations catastrophically failing at the same intervention that others have implemented successfully. Social-ecological systems are dynamic; they mutate and change. Generality can be achieved through lots of RCTs on the same intervention, which for some strategies (say, marine protected areas) might be possible, but where an intervention is repeated that many times, there are also less burdensome statistical approaches to deriving generalities about impact.
Which brings me to my third and final point. If the results of RCTs are not generalizable and therefore only useful for evaluating the effectiveness of a particular project, their principal role is in reporting to funders and justifying the continuation of funding. I argue that these ends can usually be accomplished without the heavy financial or logistical burden of an RCT. Many (but certainly not all) donors rightly want evidence that the objectives you are trying to achieve are moving in the right direction.
Yes, knowledge that the strategies you undertook were the principal cause of this is ideal, but I have rarely seen it insisted upon — experience suggests most people are happy with a well-demonstrated trend in the right direction!
Data-driven monitoring and decision-making are critical for improving conservation, but there are more viable, less intensive tools than RCTs.