Jonah Lehrer had a piece in The New Yorker in December (“The Truth Wears Off: Is There Something Wrong with the Scientific Method?”) on a seemingly dramatic recent decline in replicability of many scientific studies, particularly in ecology and psychology.
Lehrer explored a few explanations — selective reporting, journal bias toward “pathbreaking” studies and positive data, regression to the mean — and found them all wanting. The consequences of his reportage seem to throw the scientific project into doubt: What do we do when rigorously validated findings can no longer be proved?
I put the piece to five Conservancy scientists and asked them: Have you seen this trend in conservation science? What’s your explanation for what Lehrer is reporting — do you even buy that it’s happening? If you buy it, what are the consequences (if any) for conservation science and conservation practice?
Below, Conservancy scientists Jonathan Hoekstra and Jensen Montambault respond to Lehrer (future installments will include the other responses):
Jonathan Hoekstra: “It’s a Reminder to Remain Skeptical”
I have not measured — or seen measurements of — the decline effect in conservation, but I have certainly seen ideas rise and fall.
I think conservation is particularly susceptible to two effects, one that Lehrer touched on and one that he did not. The first (which Lehrer discusses) would fall under the bias for positive results. Conservation science has a purpose, and so it may be susceptible to people wanting to solve a problem and then seeking the most obvious evidence — evidence that will probably exhibit a large effect (I agree with Lehrer that this is not the same as scientific fraud).
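The mechanics of that bias are easy to see in a toy simulation. The sketch below is purely illustrative — the effect size, sample size, and significance threshold are all invented numbers, not drawn from any study Lehrer cites. If journals tend to publish only the studies that cleared p &lt; 0.05, the published estimates of a small true effect are inflated, and later replications will appear to “decline” back toward the truth:

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed small true effect, in standard-deviation units
N = 30              # observations per simulated study
STUDIES = 2000      # number of simulated studies

def run_study():
    """Simulate one study: return its estimated effect (the sample mean)
    and whether it clears a rough two-sided z-test at p < 0.05."""
    xs = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / N ** 0.5
    return m, abs(m / se) > 1.96

results = [run_study() for _ in range(STUDIES)]
all_mean = statistics.mean(m for m, _ in results)
pub_mean = statistics.mean(m for m, sig in results if sig)  # "published" only

print(f"true effect:                  {TRUE_EFFECT}")
print(f"mean across all studies:      {all_mean:.2f}")  # close to the truth
print(f"mean across 'published' ones: {pub_mean:.2f}")  # inflated upward
```

No fraud is involved anywhere in this simulation; the inflation comes entirely from which studies get through the significance filter.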
The other explanation I find compelling is the popularity effect. This was not discussed by Lehrer but is featured here: http://is.gd/jIqK9. In my experience, it goes something like this — someone observes something interesting and writes it up as a possibly meaningful new insight (even though it might just be an idiosyncratic phenomenon). Others are intrigued and search for the same in their work. The idea becomes popular in the literature and at scientific meetings. Anecdotal evidence accumulates (conservation too rarely involves real experimental or even structured observational studies). The real importance of the idea turns out to be less than the hoopla suggests, and people either get bored of it or are drawn away by the next shiny idea.
That gets me to your last question. What Lehrer did not discuss (but what really matters) is: “Does the decline effect matter?” Lehrer insinuates that it does — that’s what makes his story sufficiently provocative to win the attention of The New Yorker audience. But I’m not entirely convinced. For example, Lehrer describes an 80 percent decline in the effect size of fluctuating asymmetry on male reproductive success. Well, it turns out that those effect sizes went from small to smaller. Even on the high side, fluctuating asymmetry only explained about 10 percent of the variation in reproductive success. That’s actually pretty good for an evolution study (hence publication and lots of academic attention), but it doesn’t say anything about the factors that explain the other 90+ percent of variation in reproductive success. Lehrer also referenced a meta-analysis by Jennions showing significant declines in measured effects over time. Jennions’ findings, while highly significant statistically, were similarly small in terms of effect size.
To me, the decline effect only matters if it would steer conservation action in a different direction. If the size of an effect declines over time (or if our statistical confidence that it is something greater than zero falls below 95 percent), but the effect would still motivate the same conservation action, then the decline only matters in an academic sense. I didn’t see evidence in Lehrer’s story that effects reversed sign from positive to negative; that would be troubling, because it would suggest that the science could be wrong and subsequent action misguided. Fortunately, science tends to self-correct over time. That is probably the most important takeaway for me from Lehrer’s piece — it is a reminder to remain skeptical, and to continually investigate what we think we know, so that science can self-correct and conservation is not led astray.
—Jonathan Hoekstra, director of conservation science, The Nature Conservancy in Washington
Jensen Montambault: “If You Can’t Question Science, You Can’t Do It”
I was so excited this was in The New Yorker. I think the piece reflects a pendulum shift that’s happened in science. In the 1980s, in ecology and other disciplines, we got very into reductionism — the idea that if we have a precise hypothesis, we can figure out how the world works. In part, that was a reaction to natural history. Now we’re saying: Wait a minute, insisting that P<0.05 equals significance is arbitrary. Most good ecological statisticians will tell you that if you really want to know what’s going on with your system, you have to look at more than just “What’s a statistically significant result?”
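The arbitrariness of that cutoff is easy to demonstrate. The short sketch below (the z statistics are invented for illustration) computes two-sided p-values from the normal distribution for two studies whose test statistics are nearly identical — yet one lands just under 0.05 and is called “significant,” while the other lands just over and is not:

```python
import math

def p_from_z(z):
    """Two-sided p-value for a z statistic, via the normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Two hypothetical studies whose test statistics barely differ:
p_a = p_from_z(1.97)   # just inside the conventional cutoff
p_b = p_from_z(1.95)   # just outside it

print(f"z = 1.97 -> p = {p_a:.3f}   'significant'")
print(f"z = 1.95 -> p = {p_b:.3f}   'not significant'")
```

The underlying evidence in the two cases is almost indistinguishable; only the binary label changes — which is exactly why looking beyond “significant or not” matters.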
From a personal standpoint, I’m thinking about when I worked on birds in the Turks and Caicos Islands in the Caribbean. There are these endemic birds: the Bahama Mockingbird, the Cuban Crow, the islands’ own subspecies of Thick-billed Vireo. They’re cute but not “charismatic,” so no one has really studied them that well. And they are a little on the rare side, so when you’re in the field, you don’t have the most ideal data set to work with.
When I went out and did the survey the way you’re supposed to, I got interesting, significant results — but not as interesting as when I combined those results with casual observations of the species and of how the villagers and tourists were interacting with them. There’s a whole lot more to deciding where or whether to put a protected area than just identifying whether a habitat is in crisis. Critical thinking and reevaluating your question are, to me, what science is, more than just the numbers.
That’s why I think this piece is fantastic, even if some people worry about making the public doubt science. We may not want to make people feel insecure about science — but if you can’t question science, you can’t do it.
—Jensen Montambault, conservation measures specialist, The Nature Conservancy
(Image credit: …-Wink-…/Flickr through a Creative Commons license.)