Matt Miller and I were in Madison, Wisconsin this week, running a science communications workshop for Conservancy and partner staff.
I began as I usually do — touting the importance of knowing your audience. Who are they? What are their biases, their values, their language, their issues?
“That sounds great,” a participant finally said, raising his hand. “But how do we figure all that out?”
Good question. We can’t all hire social scientists to do audience analyses for us. (Wouldn’t that be nice?)
But is it really so hard to know your audiences?
Don’t we already all have some hypotheses that we can test?
Because the alternative, of course, is to revert to the deficit model of science communications — just dumping your findings out there and hoping that changes something. An abundance of research says this approach doesn’t work, and many science communicators and scientists have gotten religion on the issue, calling for alternatives.
Unfortunately, despite all the handwringing about the deficit model, mainstream science communications practice still largely clings to it: paper → press release → media and blog outreach → count the hits → move on to the next paper. It’s easy, it’s what we know, and we don’t have that social science to help us work smarter.
It’s become fashionable among science communicators to call for an applied science of science communication to guide practitioners — a field that would create a playbook on what works for various audiences in which situations.
I’m late catching up to this Ars Technica article from August by John Timmer, which makes just that argument. Timmer is one of the outstanding science writers working today, and I’d be the last one to argue against gathering a body of research we could use. But I find Timmer’s take on the deficit model fascinating and revealing:
I’m a full-time science communicator, and the crusade against the deficit approach creates all sorts of problems for me. To begin with, my job is largely to convey basic facts. I also try to provide context and a degree of analysis where appropriate, but it’s ultimately the latest research findings that make up the majority of our science content. More generally, I’m not attempting to resonate with any particular cultural group, unless you consider an interest in the natural world a cultural affinity. So I’m often told that my job goes against the best practices of science communication.
I’m not the only one who faces that problem. If a grad student wants to describe their research to the public, they’re not going to try to resonate culturally with anyone; instead, they need skills in distilling complex science down to things that are easy to grasp. In contrast, someone advocating for science-based policies doesn’t necessarily need the ability to distill complex science but will need to know how to use culturally resonant phrasing if they want to get everyone on board.
Unfortunately, all too often, different types of communicators are getting this one-size-fits-all advice. Sure, the deficit model is wrong, but does that matter to you? If you are planning on talking about your own research, how do you find out what’s effective?
So the deficit model is still OK, Timmer seems to be saying, because many scientists just need to “describe their research to the public” and “convey basic facts,” and we’ll have to wait for that applied science field to coalesce and tell us “what’s effective,” anyway.
I think that’s wrong on several counts.
First: There is no “the public.” That’s a mythic animal, a pretend audience — and one that absolves us of the responsibility of dialogue.
Second: The job of a science communicator — and, I would argue, of an applied scientist — isn’t “largely to convey basic facts.” That’s a journalist’s conceit: I write, and others read. (It’s also a bad business model for journalism, but that’s another post on another blog.)
The job of a science communicator is to convey the science in such a way that it can be of maximum use, and so that you’ve maximized the chances that specific actors respond to the information in the ways you desire.
That means understanding who might use it and how: analyzing the audiences and mapping out their potential responses.
In an attention economy, if you don’t address these questions at a pretty granular level, you’re wasting your time.
Third: Many scientists — especially applied scientists — know their audiences better than we give them credit for, and better than they give themselves credit for.
Back to the Wisconsin training. My answer to the question “how do we figure out all this stuff about our audiences?” was to put the participants through the Message Box, the technique developed by Nancy Baron to distill complicated science down to core messages and make them relevant to the audience you’re speaking to.
The scientists took something they wanted to develop an elevator speech for — a paper, an idea, a conservation practice, their jobs — and broke it down into the Problem/So What?/Solution/Benefits framework of the Message Box.
Then I made them pair off and deliver their Message Boxes to each other in decreasing amounts of time, starting with 2 minutes. (We got down to 1:15 by the end — terrifying. It’s “speed dating for scientists.”)
For the first pairing, they could choose their own audience. For the second and third pairings, I called out audiences for them — “senior manager,” “government official,” “ordinary citizen who knows nothing about what you do.”
This on-the-fly audience switching, participants said afterwards, was one of the most valuable parts of the exercise. “It got me out of the comfort zone I had with my messages and forced me to think about what would really work with that audience,” one said.
In other words: The scientists already had a lot of hypotheses about their audiences. Now, they had the tools and willingness to test those assumptions in real-life situations.
I couldn’t ask for more as a science communicator. We need to cultivate and build on that dynamic.