Timm Kroeger is senior environmental economist for The Nature Conservancy.
At its core, economics is about using scarce resources smartly to maximize the production of desired outcomes. Most human endeavors are constrained by available resources, and their pursuit therefore has an economic dimension.
Conservation is no exception. So what does it mean to be smart about pursuing conservation objectives? It means employing the strategies and tools that yield the biggest conservation gain for our budget — in other words, maximizing the conservation “return” on our investments.
Unfortunately, estimating those gains for each of the approaches we might deploy to promote a particular conservation outcome is generally not a trivial undertaking. The task would be easier if systematic reviews of conservation interventions were widely available to provide an evidence base to draw on (Pullin and Knight, 2001).
Yet, for many interventions, that evidence base is currently less than solid, especially for interventions that aim to increase the provision of ecosystem services.
Here, the challenge is to translate evidence on the effects of interventions on ecosystem functions into the resulting changes in ecosystem services that directly impact people’s wellbeing (Ringold et al., 2013).
For example, we know that riparian reforestation reduces nutrient inputs into surface waters from pasture and agricultural lands. But by how much does reforestation of specific lands in the watershed improve the quality of recreational swimming or fishing at the downstream locations where those activities occur? How much does it reduce water treatment costs for the downstream utility?
Conducting rigorous experimental or quasi-experimental assessments for each and every project would be prohibitively expensive and inefficient. After all, whatever resources we spend on predicting and documenting the conservation outcomes of interventions are no longer available for actually implementing those interventions and producing those outcomes.
Effectiveness evaluation is itself subject to a return-on-investment calculus: the additional resources invested in analysis must generate "value of information" (improved project design and better conservation outcomes) that at least offsets the reduction in outcomes caused by redirecting funds from implementation to evaluation.
In short, the goal of conservation planning cannot be to identify for each project the “perfect” intervention.
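This tradeoff can be made concrete with a toy calculation. The sketch below is purely illustrative: the budget, the per-dollar outcome rate, and the assumed 15 percent targeting improvement bought by evaluation are all hypothetical numbers, and the linear effectiveness model is an assumption, not a real planning tool.

```python
# Toy value-of-information calculation. All numbers are hypothetical
# assumptions; the point is only the structure of the tradeoff.

budget = 1_000_000.0        # total program budget ($)
outcome_per_dollar = 0.010  # conservation outcome units per $ spent on implementation
effectiveness_gain = 0.15   # assumed targeting improvement bought by evaluation

def total_outcome(eval_spend: float) -> float:
    """Outcomes from the implementation dollars that remain after evaluation,
    boosted by better targeting if any evaluation was funded."""
    implementation = budget - eval_spend
    boost = 1.0 + effectiveness_gain if eval_spend > 0 else 1.0
    return implementation * outcome_per_dollar * boost

no_eval = total_outcome(0)           # spend everything on implementation
with_eval = total_outcome(100_000)   # divert 10% of the budget to evaluation

print(f"Outcomes without evaluation: {no_eval:,.0f}")
print(f"Outcomes with evaluation:    {with_eval:,.0f}")
print(f"Net value of information:    {with_eval - no_eval:,.0f}")
```

Under these assumed numbers the evaluation pays for itself; halve the assumed effectiveness gain and it no longer does, which is exactly the calculus described above.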
Despite these challenges, there is an urgent need to build the evidence base for many types of conservation interventions. We need a nuanced approach: focus on the projects that most require evidence (Montambault, 2012), that carry high risk and high leverage potential, and for which there is a clear plan for incorporating the findings into future planning (Montambault and Groves, 2009).
But don’t we already know that our interventions work?
For many interventions, the answer is yes. The question, however, is not so much whether something works, but rather how much or how well it works, and how well it would work at alternative sites. Only if we can answer these questions can we become smarter about selecting our intervention portfolio and improving intervention designs.
To return to the riparian example, reforestation reduces sediment loads from a parcel of land, but we need to know how much it does so on parcels with specific characteristics (e.g., land cover/use, slope, soil type and climate), and how those impacts travel downstream to the locations where we care about sediment (say, fish spawning grounds, reservoirs, or municipal water intakes).
Crucially, the size of the conservation return on our investment depends on what would happen without the conservation action.
Reforestation of the riparian zone of a property currently in pasture will reduce stream sediment loads. But we cannot automatically assume that our effect on sediment loads is the difference between current loads and loads produced with a restored riparian forest in place. Reforestation might occur anyway for any number of reasons, or pasture might be converted to row crops or some other cover.
Correctly identifying the impact of an intervention on target outcomes thus requires accounting for changes in the other factors affecting those outcomes.
Such “counterfactual thinking” is critical for credible project evaluation and for building the evidence base for conservation and environmental policy in general (Ferraro, 2009).
Constructing quality counterfactuals is neither easy nor always possible. But neither is it rocket science. Failing to account for the counterfactual — "the world without the project" — can lead to large biases in assessments of intervention effectiveness (Blackman, 2013).
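A toy numerical example (all quantities hypothetical) shows how large that bias can be when the world without the project would not simply have stood still:

```python
# Hypothetical illustration of counterfactual bias in estimating the
# sediment-load reduction from riparian reforestation. All numbers invented.

baseline_load = 100.0        # tons/yr of sediment before the project
observed_load = 60.0         # tons/yr observed after reforestation

# Without the project, suppose the pasture would have been converted to
# row crops, pushing loads above the baseline rather than holding steady.
counterfactual_load = 120.0  # tons/yr in "the world without the project"

naive_impact = baseline_load - observed_load       # before-vs-after comparison
true_impact = counterfactual_load - observed_load  # counterfactual-adjusted

print(f"Naive before-after estimate:      {naive_impact:.0f} tons/yr avoided")
print(f"Counterfactual-adjusted estimate: {true_impact:.0f} tons/yr avoided")
```

Here the naive comparison understates the project's impact by a third; had reforestation been likely to happen anyway, the same arithmetic would overstate it instead.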
The ability to demonstrate and quantify the effectiveness of conservation interventions will be crucial for mobilizing large-scale investments in natural infrastructure solutions. While those investments seek to increase particular ecosystem services flows rather than conservation per se, they nevertheless can yield substantial conservation benefits.
However, mobilizing those investments in many cases will require demonstrating a solid “business case” by showing that conservation produces desired gains in priority ecosystem services at lower cost than alternative solutions.
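The shape of such a business-case comparison can be sketched as a simple present-value cost-effectiveness calculation. Everything below is an illustrative assumption: the capital and operating costs, the service level, the 30-year horizon, and the 5 percent discount rate are invented numbers, not data from any real project.

```python
# Hypothetical cost-per-unit-of-service comparison between a watershed
# conservation ("natural infrastructure") program and a conventional grey
# alternative. All figures are illustrative assumptions.

def cost_per_unit(capital: float, annual_om: float, years: int,
                  annual_units: float, discount_rate: float = 0.05) -> float:
    """Present-value cost per discounted unit of service delivered."""
    pv_cost = capital
    pv_units = 0.0
    for t in range(1, years + 1):
        d = (1.0 + discount_rate) ** -t
        pv_cost += annual_om * d     # discounted operation & maintenance
        pv_units += annual_units * d
    return pv_cost / pv_units

natural = cost_per_unit(capital=2_000_000, annual_om=50_000,
                        years=30, annual_units=10_000)  # e.g., tons of sediment avoided/yr
grey = cost_per_unit(capital=5_000_000, annual_om=200_000,
                     years=30, annual_units=10_000)

print(f"Natural infrastructure: ${natural:.2f} per unit of service")
print(f"Grey infrastructure:    ${grey:.2f} per unit of service")
```

Under these invented numbers the natural solution wins; the point is that claims like this need rigorous, counterfactual-aware evidence behind each input before investors will act on them.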
Take water funds, for example. Investors contributing to existing water funds in many cases appear to be motivated by a variety of reasons other than a clear expectation of net financial gain.
Many of those reasons — advancing environmental science and management; benefiting local communities and ecosystems; applying the precautionary principle — are laudable, and there will be other cases where these reasons provide a sufficient incentive for some investment by some private or public entities.
Yet, it seems reasonable to assume that vastly larger investments in watershed conservation in many more watersheds, not 10 or 20, but 1,000 or 2,000, might be unlocked if one could demonstrate that their returns would exceed those of alternative, conventional solutions.
Such demonstration requires evidence based on credible analysis similar in rigor to that demanded for conventional solutions. It also requires application of an appropriate analytical framework that guides the design of efficiently targeted monitoring, construction of counterfactual scenarios, and use of modeling and appropriate ecosystem services metrics (Higgins et al., 2013).
In short, to have a chance at unlocking those sorely needed new funding streams for conservation, we need to move from intuition, anecdotes and qualitative assessments to quantitative proof of effectiveness. We need to identify where investment in creating an evidence base would yield the highest returns of target ecosystem services, and then start building that base.
Water funds have begun doing this, and so have a number of other high-profile strategies.
None of this is easy. Yet it is necessary, worthwhile, and urgent.
The massive projected infrastructure spending on climate change adaptation (Parry et al., 2009) will go entirely towards grey infrastructure in the absence of proof of the competitiveness of natural alternatives. And grey infrastructure is a long-lived sunk cost: once it is in the ground, no amount of proof of the superior performance of natural alternatives will un-build it.
Opinions expressed on Cool Green Science and in any corresponding comments are the personal opinions of the original authors and do not necessarily reflect the views of The Nature Conservancy.
Blackman, A. 2013. Evaluating forest conservation policies in developing countries using remote sensing data: An introduction and practical guide. Forest Policy and Economics 34:1-16.
Ferraro, P.J. 2009. Counterfactual thinking and impact evaluation in environmental policy. New Directions for Evaluation 122:75–84.
Higgins, J., A. Zimmerling, A. Vogl, T. Kroeger, L. Bremer, P. Petry, C. Leisher, J. Nelson and H. Tallis. 2013. A Primer for Monitoring Water Funds. Arlington: The Nature Conservancy.
Honey-Rosés, J., K. Baylis and M.I. Ramírez. 2011. A spatially-explicit estimate of avoided forest loss. Conservation Biology 25(5):1032-1043.
Montambault, J. 2012. Conservation’s smoking gun: Who bears the cost of making us ‘evidence-based’? Science Chronicles September 2012.
Montambault, J. and C. Groves. 2009. Improving conservation practice by investing in monitoring strategy effectiveness. Conservation Measures Working Paper No. 2.
Parry, M., J. Lowe and C. Hanson. 2009. Overshoot, adapt and recover. Nature 458:1102-1103.
Pullin, A.S. and T.M. Knight. 2001. Effectiveness in conservation practice: pointers from medicine and public health. Conservation Biology 15:50–54.
Ringold, P.L., J. Boyd, D. Landers and M. Weber. 2013. What data should we collect? A framework for identifying indicators of ecosystem contributions to human well-being. Frontiers in Ecology and Environment 11(2):98–105.