In education research, we like to flagellate ourselves because practitioners don't use our research. It's our duty to evangelize our findings, or so conference panels, keynote speakers, and guilty luminaries like to tell us. We use insider jargon, write impenetrable prose, and publish in journals only other academics read, and it's our fault if no one reads our stuff. To be sure, there is no excuse for bad prose, and we all ought to watch our use of technical mumbo-jumbo, although what outsiders, in any field, deride as "jargon" is more often in fact precision of meaning. But I want to push back against this argument for professors-as-popularizers a little bit, for two reasons.
First, on what basis have we decided that the job of academics includes outreach to the general public? To the extent our job descriptions include “service,” this mostly means service to the institution and to the academic community. (Read a tenure file if you don’t believe me.) There simply isn’t much time left to serve the practitioner public, and when such outreach does occur, it’s often treated as volunteer work, not knowledge-sharing. We teach our students, to be sure, but that’s quite a different proposition from convincing a random financial aid administrator in Wichita or vice-principal in Schenectady to read my book. Producing knowledge is a distinctly different skillset from popularizing it, and perhaps the problem here is our failure to encourage the latter. We need more people who are capable of translating research into accessible, useful information, but there is no reason those should necessarily be the same people who produce that research.
Second, we haven't asked ourselves whether practitioners want our research. Of course, some do, just as some academics can popularize their work. But most people believe much more firmly in their own experience than in anyone's research. My management students are singularly uninterested in any findings about group dynamics, human perception, or ways of managing employees that have been shown to be effective. “In my experience,” they begin, obliterating the lifetime work of dozens of scholars in favor of their own small corner of the world. My students accept that I can teach them certain technical facts, such as the difference between a sole proprietorship and an LLC, but anything that comes across as social science-y is a matter of “opinion.”
We all do this, of course, in areas we aren’t experts in; confirmation bias and perceptual filtering cause most facts to bounce right back off of us. It’s not just my students, but me, too. We read about the Milgram experiments and think, I would never have gone along with that. I would have refused to administer those shocks! Or we believe we have the willpower to resist propaganda, when we don’t even have the willpower to resist a bag of M&Ms.
The thing is, all the popularization in the world is but a minor force against the tides of our personal beliefs and experiences. Sometimes, research can shake us out of our complacency; taking Harvard’s implicit association tests can be a powerful experience. More often, we read it and either agree or disagree before forgetting about it entirely. We have to face up to it only when it becomes policy (witness the storm around the Common Core; I dare say most of its supporters and detractors have only a vague idea what it is actually about, let alone whether there is any research on the efficacy of its content).
No, it’s mostly hubris that makes us assume the practitioner public wants our expertise in the first place.
Most of us aren’t good at reaching practitioners, and the assumption that practitioners want to be reached is overblown. For those reasons, I’d like to see us as a field ease off on the self-flagellation for a while. This doesn’t mean that academics who are skilled at sharing their work should stop, but maybe everyone else can stop apologizing while continuing to avoid doing it?