by Jon Sanders
Director of the Center for Food, Power, and Life, Research Editor, John Locke Foundation
Thirteen years ago, Alex Jones introduced me to his radio audience with the implication that I was “CIA funded.”
I was there to defend my May 2005 “Course of the Month” column for what is now the Martin Center. A political science professor’s syllabus had included Jones’ websites, among others, as part of the “daily readings” for her course on “911 The Road to Tyranny” (a title she had taken from a Jones video). I had written a column describing what I read on all those sites in a single day — including material about alien abductions, UFOs, Israel’s “white slave trade,” the “staged” election of 2004, the 9/11 “hoax,” etc.
Jones objected to what, in his view, was my choice (rather than the professor’s) to lump his sites in with the others. He assumed I did that for the purpose of making him look bad:
Despite my explanations that the main focus of the article obviously wasn’t him or his site, Jones continued to press the point that I was attempting to tar him by association with websites that write about fluoride and UFOs and the like.
I did ask him why that day his site had an article headlined “Bigfoot caught on video?” to which he responded in a somewhat calmer tone that sometimes his site carries articles from the popular press (and this was from WorldNetDaily, I think).
For several weeks afterwards, I was subjected to bizarre, abusive e-mails from his audience. Even so, it never occurred to me, nor would it ever, to seek to silence him or them. As I have long held, you combat bad speech with better speech.
I know that the tyranny of silencing bad speech inevitably silences any speech that threatens the powerful. I would rather let the weeds grow up with the wheat than burn the whole field.
David French writes in The New York Times about the social-media platforms’ coordinated action this week to remove Alex Jones and Infowars from their sites. French begins by acknowledging the obvious:
First, Alex Jones is a loathsome conspiracy theorist who generates loathsome content. Second, there is no First Amendment violation when a private company chooses to boot anyone off a private platform. Third, it seems reasonably clear that Mr. Jones’s content isn’t just morally repugnant, it’s also legally problematic. He makes wild, false claims that may well cross the line into libel and slander.
He then proceeds to make the bigger point:
There are reasons to be deeply concerned that the tech companies banned Alex Jones. In short, the problem isn’t exactly what they did, it’s why they did it.
Rather than applying objective standards that resonate with American law and American traditions of respect for free speech and the marketplace of ideas, the companies applied subjective standards that are subject to considerable abuse. … These policies sound good on first reading, but they are extraordinarily vague.
French lists a few examples of how such anti-“hate speech” policies are readily subject to abuse and to wildly disparate interpretation based on politics. And he makes this case in The New York Times, which has itself offered a crash course in the same when it comes to new hires and gratuitously offensive social-media habits (Sarah Jeong, defended; Quinn Norton, fired in six hours).
French’s solution to the problem would reconcile the private companies’ rights with users’ expectations of free speech on their platforms (which were deliberately encouraged by those companies):
The good news is that tech companies don’t have to rely on vague, malleable and hotly contested definitions of hate speech to deal with conspiracy theorists like Mr. Jones. The far better option would be to prohibit libel or slander on their platforms.
To be sure, this would tie their hands more: Unlike “hate speech,” libel and slander have legal meanings. … It’s a high bar. But it’s a bar that respects the marketplace of ideas, avoids the politically charged battle over ever-shifting norms in language and culture and provides protection for aggrieved parties. Nor do tech companies have to wait for sometimes yearslong legal processes to work themselves out. They can use their greater degree of freedom to conduct their own investigations. Those investigations would rightly be based on concrete legal standards, not wholly subjective measures of offensiveness.
In sum, French writes, “When creating a true marketplace of ideas, why not let the First Amendment be your guide?”
This solution hinges on a rather big “if,” as I see it: “if companies like Facebook are eager to navigate speech controversies in good faith.”
We can only hope they are or, if they aren’t, that the market finds freer alternatives quickly.