For more than 2,000 years, physicians used leeches to treat everything from hemorrhoids to headaches.
Few questioned medical leeches’ efficacy because authorities such as the Greek physician Galen (AD 129–c. 216) used them, and many people had seen the sick recover after a good “leeching.”
Doctors did not discover that medical leeches hurt rather than help most ailments until they developed what statistician Howard Wainer calls “a reverence of empiricism” (Wainer 2014).
How we know something is as important as what we know.
In conservation, we still rely primarily on “leech logic” for project design.
From strengthening protected areas to payments for watershed services, we think we know what works, but we have no evidence.
With Red Lists and global habitat prioritizations, we know empirically what we need to protect. But when it comes to conserving the lands and waters on which all life depends, like a 19th-century doctor, we look to anecdote and experience for answers because we lack reliable evidence.
We are the first generation in human history to have the ability to use data to inform our decision-making. Rigorous data collection and measurement have revolutionized medicine and finance.
But conservation is only just beginning to develop rigorous measurements of our work.
Exhibit A is the “Open Standards for the Practice of Conservation.” The April 2013 Open Standards are testament to both how far conservation has come in the last decade and how far it has yet to go.
Twenty-five of the largest conservation organizations and funders supported Open Standards 3.0, and the Standards have become the accepted practice within the conservation community.
When developing a monitoring plan, the Open Standards call for methods that are “accurate, reliable, cost-effective, feasible and appropriate.” To this list I could add valid, targeted, and a half-dozen other adjectives that describe the ideal monitoring method but fail to prescribe which method should be used where.
The Open Standards give a normative statement about what a monitoring method or tool ought to be. The tools themselves are missing.
I submit that, globally, the single most important tool for conservation monitoring is baseline data — i.e., data on carefully chosen indicators collected before a conservation action begins.
Walk with me through the logic.
If we agree that conservation organizations alone are never going to save the lands and waters on which all life depends, and if we agree that our conservation work needs to demonstrate replicable approaches that can be adopted by others, then our monitoring needs to show in the most credible way possible whether a project met its objectives and how it did so.
From medicine to education, the caliber of evidence it takes to put a replicable approach into the global marketplace of ideas is some form of experimental or quasi-experimental monitoring design (Shadish et al. 2002). It is the standard of proof without which conservation is just another arm-waver in the crowd.
Within conservation, experimental design is rarely possible, but ecology has a useful quasi-experimental design: BACI or Before-After, Control-Impact (Underwood 1994). Economists call this species of study a Difference-in-Differences design (Abadie 2003). Regardless of what it is called, it requires a benchmark, a baseline.
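As an illustration, the BACI (difference-in-differences) estimate reduces to simple arithmetic on four averages: the change at the impact site minus the change at the control site. The sketch below uses invented numbers and variable names purely to show the calculation; it is not data from any real project.

```python
# Hypothetical BACI / difference-in-differences sketch.
# Mean values of an indicator (e.g., percent forest cover) at an
# impact site and a control site, before and after a conservation
# action. All numbers are invented for illustration.

impact_before, impact_after = 62.0, 68.0    # project site
control_before, control_after = 61.0, 60.0  # comparison site

# Change at each site over the project period
impact_change = impact_after - impact_before      # +6.0
control_change = control_after - control_before   # -1.0

# The difference-in-differences estimate attributes to the project
# only the change at the impact site beyond the background trend
# observed at the control site.
did_estimate = impact_change - control_change     # +7.0
print(did_estimate)
```

Note that the calculation is impossible without the two “before” numbers, which is precisely why the baseline is the non-negotiable ingredient of the design.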
Baselines are a requirement for evidence-based conservation.
Without a baseline, it is impossible to measure what changes over the life of a project. Without a baseline, one cannot verify results. Without a baseline, attribution of cause and effect is impossible. In short, absent baselines, conservation will never develop rigorous evidence of results and will lose ground among key donors who demand evidence rather than anecdotes before investing.
Baseline data collection is not just about measuring results. A baseline can also help a project design team zero in on priority actions. Baselines help define the eventual project evaluation because the methods used in the baseline are usually the same methods used in the endline evaluation.
Get the baseline right, and a project’s likelihood of success goes up, and the ability to measure the expected results is almost certain.
Our mission and our supporters deserve nothing less than a strong focus on evidence-based conservation, and the place to start is with baselines for every new project aimed at replication.
For the conservation community to best demonstrate its effectiveness, we need more baselines as a first step.