Jonah Lehrer had a piece in The New Yorker in December (“The Truth Wears Off: Is There Something Wrong with the Scientific Method?”) on a seemingly dramatic recent decline in replicability of many scientific studies, particularly in ecology and psychology.

Lehrer explored a few explanations — selective reporting, journal bias toward "pathbreaking" studies and positive data, regression to the mean — and found them all wanting. The consequences of his reportage seem to throw the scientific project into doubt: What do we do when rigorously validated findings can no longer be replicated?

I put the article to five Conservancy scientists and asked them: Have you seen this trend in conservation science? What's your explanation for what Lehrer is reporting — do you even buy that it's happening? If you buy it, what are the consequences (if any) for conservation science and conservation practice?

Below, Conservancy scientists Rob McDonald, Doria Gordon and Joe Fargione respond to Lehrer (Jonathan Hoekstra and Jensen Montambault responded in Part I):


Rob McDonald: “I Want to Thank Lehrer for Writing It”

Lehrer raises some classic issues, all of which were discussed when I was in graduate school. I had a whole course in the importance of proper experimental design — in particular, on selecting a representative sample and ensuring that there is a well-posed hypothesis that is being tested fairly. Even his more esoteric philosophical ideas — like how paradigms sometimes shape which questions scientists explore, or how selective reporting of results by scientists biases the overall scientific literature — are common fodder for graduate seminars.

Instead of dissecting Lehrer's argument, I want to thank him for writing it. Facts have meaning to scientists, and while it may hurt our pride a bit to hear these issues discussed publicly, it is healthy for the scientific method. Any fair criticism of the scientific method that aims to make our methodology and practices more accurate is, after all, part of the scientific method, part of the quest to learn something empirically true about the world. Unlike most claimed sources of "knowledge," science has no need to rely on received wisdom. If the scientists Lehrer cites in his article have a point about the severity of this issue in the sciences, then let's all work to solve the problem by doing better science.

What pisses me off is the over-the-top headline, questioning whether there is "something wrong with the scientific method." The Atlantic Monthly had another article recently on the same topic, interviewing the same scientists, and titled it "Lies, Damned Lies, and Medical Science" (David Freedman, November 2010). In reality, both pieces are somewhat nuanced looks at how the scientific method could become more accurate. I suppose editors want to sell magazines, and there is something to be said for a pithy title. What is dangerous, though, is that many politicians are lining up to attack scientists, particularly on the issue of climate change research, and they will use these headlines to suggest that there is no difference at all between science and personal opinion. I want to see Lehrer and Freedman, when this inevitably happens, forcefully tell these politicians how wrong they are. Lehrer's blog piece defending climate science is a good step in that direction.

Churchill once said that democracy is “the worst form of government except all the others that have been tried.” The scientific method is perhaps the worst form of discovering empirical truth about the world…except for all the other methods humanity has devised. The Atlantic Monthly article noted that one scientist’s review of medical research suggests that 80 percent of observational studies are proven wrong, as are 25 percent of randomized trials and 10 percent of large randomized trials. This clearly indicates the scientific method can get better; but it is also evidence for why science works — the properly designed, statistically rigorous experiments are much more accurate than the observational studies. What, do you suppose, is the error rate of politicians when they are making forecasts? Or editorial writers when they are pontificating? In 2002, 77 percent of the U.S. Senate voted to pass the resolution allowing the invasion of Iraq, chiefly because there was a widespread belief at the time in the existence of Iraqi weapons of mass destruction that have yet to be found. Can we expect Lehrer soon to write an article asking: “Is there something wrong with political rhetoric?”

Rob McDonald, vanguard scientist, Analysis Team, The Nature Conservancy


Doria Gordon: “It’s an Additional Argument that Measures Are Critical to Our Work”

I haven’t noticed this trend, but it’s likely because there’s so little replication of ecological/conservation experiments. I certainly agree that one can inadvertently bias experimental design and analysis, and that it probably happens more frequently than we’d all like to believe. It’s also clear that hypotheses are often refined as results come in, but still presented as if they were developed a priori without modification. But bias is more likely to explain incorrect conclusions than diminishing ones, so I don’t know if that’s the explanation for this phenomenon.

The consequences of this pattern are likely to be more immediately important for fields like medicine, environmental toxicology, etc. than for the majority of conservation science. If the relationship holds, but the results diminish over time, many of the conclusions are still valid, but the effect is less strong than originally understood. That might still be fine unless there are significant trade-offs of concern (in undesirable side effects, for example). I’ll be interested to hear if others suggest that we should alter our approach, which is already so fraught with subjectivity, context and assumption (because of timing, data availability, dependence on human interactions, etc.) that it’s hard to imagine that this finding of diminishing effect size would have a really significant impact. But I think the degree to which we should incorporate this additional uncertainty into our thinking again would depend on how much a given strategy is based on a specific scientific relationship, the risk, investment, etc.

The article certainly supports the idea that we should evaluate the effects of our strategies in place, rather than assuming that effects documented elsewhere will hold in a new location under different conditions, much less in the same location under the same conditions! So I see it as an additional argument that measures are critical to our work. I am choosing not to accept the alternative conclusion, that science is inconsistent and irrelevant and therefore not worth pursuing; I just can't believe that.

Doria Gordon, director of conservation science, The Nature Conservancy in Florida


Joe Fargione: "The Cure for Bad Science Is More Science"

1. Is there a real issue here? Yes. Scientific conclusions from one study are subsequently overturned more often than would be ideal.

2. Is it a problem? Yes and no. If you’ve taken a drug or invested in a conservation strategy that doesn’t actually work, that’s a problem. But it doesn’t necessarily mean that something is wrong with the scientific method. If something is real, it is repeatable. When subsequent studies disprove an earlier result, this shows that science works. Because science doesn’t prove (it only disproves), no result is ever written in stone. Science simply provides the best available explanation for observations recorded to date, subject to reevaluation when more data are received. It’s a laudable goal for scientists to try to get it right on the first try more often through a more skillful application of the scientific method. The cure for bad science is more science.

3. What is the cause? Actual scientific fraud is exceedingly rare, but biases are ubiquitous. Because journals and media attention favor novelty and significant results, scientists go to great lengths to achieve…novelty and significant results. The problem isn’t that scientists conduct studies designed to produce such results (scientific studies are boring enough already without scientists trying to conduct redundant and insignificant studies!), but rather that the results of these studies are selectively reported. The incentive for novel and significant results can also encourage scientists to over-interpret data when original results don’t provide significant findings, conducting statistical “fishing expeditions” to test for correlations between numerous variables. Chance alone dictates they will find some correlations. When these “significant” correlations are presented as if they indicated real, causal relationships, the literature becomes polluted with spurious claims. A third cause of inconsistency: context. Different populations or ecosystems respond differently to the same stimuli. When the importance of context is ignored, studies conducted in different settings can appear contradictory, even when both results accurately reflect real patterns.

4. What can we do about it? Journals can reduce publication bias by not using novelty and statistical significance as criteria for accepting manuscripts. This practice is already in place, especially at online journals, which can afford to be more inclusive because they're not limited by page counts. Some of these journals, such as PLoS One, have explicit policies that acceptance should be based on scientific merit, not novelty. In addition, as mentioned in the article, studies can be held to higher statistical standards (such as greater sample sizes) that reduce the occurrence of spurious results. Finally, meta-analyses are currently often limited to simply analyzing effect sizes, because other data are not available from studies. Smarter publicly accessible databases will allow meta-analyses that include more contextual data, allowing greater scientific insights (e.g., does the effect depend on total annual precipitation, or on whether a patient is overweight?).

Joe Fargione, lead scientist, North America Region, The Nature Conservancy

(Image credit: mmatins/Flickr through a Creative Commons license.)
