What is “impact” for science, anyway?
And could the ways we define “impact” explain why we have less of it than we think we should?
Case du jour: PLoS ONE, the world’s largest scientific journal. Its 2012 “impact factor” (the most widely used measure of a journal’s scientific influence, calculated by dividing the citations its papers from the previous two years received that year by the number of papers it published in those two years) dropped a whopping 16% from 2011’s number, it was announced earlier this month.
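If you want that arithmetic spelled out, here’s a minimal sketch in Python; the function and every number in it are hypothetical, for illustration only.

```python
# Minimal sketch of the two-year Journal Impact Factor calculation.
# All numbers here are made up, for illustration only.

def impact_factor(citations_this_year, papers_prior_two_years):
    """e.g., 2012 JIF = citations received in 2012 by papers published
    in 2010-2011, divided by the count of those papers."""
    return citations_this_year / papers_prior_two_years

# A journal whose 1,000 papers from 2010-2011 drew 4,000 citations
# in 2012 would post a 2012 JIF of 4.0:
print(impact_factor(4_000, 1_000))  # 4.0
```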
That magnitude of Journal Impact Factor (JIF) decline would be enough to make most scholarly publishers man the life rafts. Because publishing is so central to science, the building blocks of a scientific career (hirings, promotions, tenure, grants) rely at least indirectly, and often directly, on JIFs. So scientists need to publish in journals with relatively high impact factors…and hope those numbers don’t drop before they’re up for their next job.
If you’re a scientist, you’ve almost certainly at least peeked at a journal’s JIF before submitting to it; you might even get a cash bonus when you publish in a high-JIF one such as Nature or Science. No wonder the annual announcement of every journal’s JIF each June by Thomson Reuters, the official toter-up of the numbers, is followed in some circles like Selection Sunday for March Madness.
So it’s a safe bet no champagne was popped in the PLoS ONE offices when their new JIF was announced. But scientists — including conservation scientists — weren’t happy, either.
PLoS ONE — a free-to-read, online-only, fast-turnaround, data- and graphics-friendly science journal whose 2006 debut shook up a scholarly culture used to snooty editors, $1,000-and-up annual journal subscriptions and glacial manuscript reviews — has become a favorite submission destination for many researchers who have papers that could make a splash.
I recommend it to Conservancy scientists for articles that have color graphics and potential media impact. PLoS ONE does graphs and charts well, is followed closely by science media, and has a competent and aggressive media relations staff. In addition, it doesn’t cost much to publish there, and even less for developing-country authors.
Unfortunately, PLoS ONE’s JIF will continue to drop because of the very way it does business, according to Phil Davis, a publishing analyst and contributor to the group blog The Scholarly Kitchen.
Davis argues that PLoS ONE’s first strong JIF in 2009 (4.351) brought a slew of submissions in 2010 from researchers looking to capitalize on that number. And since PLoS ONE now publishes tens of thousands of papers annually (23,464 last year, to be precise), it lacks the tight editorial selectivity that lets a Nature or a Science accept only potentially high-citation papers. So its Impact Factor will keep dropping, because any one high-impact paper will be lost in a sea of thousands; as the sketch below shows, even a blockbuster barely moves a denominator that large.
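Here’s a back-of-envelope version of the dilution argument. Only the 23,464 paper count comes from the post; the average citation rate and the 500-citation blockbuster are assumptions for illustration.

```python
# Back-of-envelope illustration of Davis's dilution argument. Only the
# 23,464 paper count comes from the post; everything else is assumed.

papers = 23_464      # papers PLoS ONE published last year
avg_cites = 2.5      # assumed average citations per paper

jif_without_hit = avg_cites
jif_with_hit = (papers * avg_cites + 500) / papers  # add one 500-citation blockbuster

print(round(jif_with_hit - jif_without_hit, 3))  # 0.021 -- the needle barely moves
```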
In essence, Davis is saying, PLoS ONE will continue to be victimized by its early success and its all-comers philosophy. He predicts a decline in megajournals like PLoS ONE and a return to discipline-based journals that have sped up their review cycles and added altmetrics without corresponding declines in JIF.
Mother of jargon, is this the end of PLoS ONE?
A better question might be: Is this the end of JIF?
JIF’s value has been debated for years, but it has never been under such systematic attack as now. A statement known as the San Francisco Declaration on Research Assessment (DORA), signed in May by more than 8,000 scholars and 300 institutions, calls for an end to using journal-based metrics such as JIF in decisions about scientific funding, appointments and promotions.
But here’s the shock: a growing body of evidence suggests JIF isn’t a particularly good measure of impact, and might even point in the opposite direction.
As George Lozano writes for the London School of Economics blog, the strength of the relationship between a journal’s Impact Factor and the citation rates of its individual papers has been dropping since 1990 — when the Internet began untethering papers from journals and search made journal provenance largely moot.
And get this: Lozano says that, since 1991, the proportion of top papers not published in top JIF journals is increasing.
“If the pattern continues,” he writes, “the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.”
Bjorn Brembs, Katherine Button and Marcus Munafo top Lozano, with a long takeout on JIF in Frontiers in Human Neuroscience that blames journal rank for everything from the rise in scientific retractions and the decline effect to the unwillingness of many publishers to make their journals open-access or to cut subscription prices.
And if that evil list isn’t enough, ranking journals is just bad scientific practice, they argue: “Much like dowsing, homeopathy or astrology, journal rank seems to appeal to subjective impressions of certain effects, but these effects disappear as soon as they are subjected to scientific scrutiny.”
Think about that before you go back to the Journal of Obscurity and Editorial Neglect.
As I do a dozen or more times each year, I recently worked with a scientist to develop a communications plan for one of her new papers. She chose to publish it in a specialty journal with the practitioner readership she wanted for her work; but that journal was print-based and subscription-only, with very limited online-first features that we had to pay $3,000 to secure.
The journal also had zero resources for media relations, so we had to generate any coverage ourselves. Worst of all, nine months passed between the paper’s acceptance and its publication — it felt as if single-celled organisms had evolved into mammals in the interim.
And communication from the journal’s editorial staff about when it finally would appear was non-existent. The paper went virtually uncovered by media. We’ll have to wait a few years to find out about scholarly impact. It’s fair to say the scientist was disappointed in the experience.
For my money, if you’re a conservationist fretting about PLoS ONE’s JIF, or about JIF at all, that’s a sign you have bigger issues to worry about.
PLoS ONE is still a premier journal for communicating your work — especially interdisciplinary and media-worthy work — as opposed to using it to get validated. Conservation science has enough trouble getting attention; we don’t need to place imaginary boulders in the pathways we have.
Other interesting recent links on JIF and scientific impact:
Paul Wouters at The Citation Culture says there are plenty of journal impact indicators better than JIF, but that they shouldn’t rule the world, either.
What would a metric look like that evaluated the impact of research not just on science, but on practice and society? Birge Wolf and co-authors take a crack at it in the latest issue of GAIA.
There’s also always social media if you want to really make an impact.
Correction, 2.10.2014: The sentence “And get this: Lozano says that, since 1991, the proportion of top papers not published in top JIF journals is declining” should have read “…is increasing.” It has been corrected.