An Oxford Ph.D. candidate named Nathan Cofnas makes that claim in an article recently published by Quillette. I think he’s right, and I think what he has to say is very important. Here’s an excerpt:

In the last few weeks, the leading social media companies have doubled down on their strategy of deplatforming people and censoring content. Alt-right accounts are disappearing from Twitter, videos on controversial topics are being deleted from YouTube, and even some politically moderate YouTube streamers/content creators who didn’t violate the terms of service are being demonetized in an effort to drive them away. But deplatforming won’t … accomplish the noble goals that these companies say are motivating them: to prevent violence and the spread of socially destructive misinformation. If these are their goals, then deplatforming will backfire—and already has backfired. …

Banning people from social media doesn’t make them change their minds. In fact, it makes them less likely to change their minds. It makes them more alienated from mainstream society, and … it drives them to create alternative communities where the views that got them banned are only reinforced.

Banning people for expressing controversial ideas also denies them the opportunity to be challenged. People with extremist or non-mainstream opinions are often written off as deranged monsters who could not possibly respond to rational argument. There are, of course, some neo-Nazis, Holocaust deniers, and the like, who conform to this cartoonish stereotype. … But not everyone outside the mainstream, and not everyone who falls for a misguided conspiracy theory, deserves to be written off. People do sometimes change their minds in response to reason. If they didn’t, there would be no point in debating anything. …

Cofnas goes on to discuss three specific reasons why deplatforming can backfire:

Firstly, banning people or censoring content can draw attention to the very people or ideas you’re trying to suppress. …

Secondly, even when banning someone reduces his audience, it can, at the same time, strengthen the audience that remains. …

Thirdly, any kind of censorship can create an aura of conspiracy that makes forbidden ideas attractive.

Cofnas provides compelling examples for each of these reasons. Then, by way of a conclusion, he adds what I regard as the most important point in the article:

An even more fundamental reason why social media companies shouldn’t try to suppress controversial ideas is that they are very bad at determining who and what is wrong or dangerous. There will inevitably be many false positives and false negatives. A major consequence of deplatforming and censorship will simply be to introduce chaos into our political discussions. Nothing good can come of that.

John Stuart Mill made the same point more than 150 years ago in On Liberty:

The peculiar evil of silencing the expression of an opinion is, that it is robbing the human race; posterity as well as the existing generation; those who dissent from the opinion, still more than those who hold it. If the opinion is right, they are deprived of the opportunity of exchanging error for truth: if wrong, they lose, what is almost as great a benefit, the clearer perception and livelier impression of truth, produced by its collision with error.

Elaborating on the first possibility, Mill added:

The opinion which it is attempted to suppress by authority may possibly be true. Those who desire to suppress it, of course deny its truth; but they are not infallible. They have no authority to decide the question for all mankind, and exclude every other person from the means of judging. To refuse a hearing to an opinion, because they are sure that it is false, is to assume that their certainty is the same thing as absolute certainty.

As Mill goes on to explain at length, and as subsequent generations of epistemologists have repeatedly shown, absolute certainty is not an option. On the contrary, fallibility across the board is an inescapable part of the human condition. We can never know for certain whether a factual assertion is true or false. We can never know for certain what constitutes the good or how best to promote it.

That, in my view, is the main reason why the content screeners at Twitter and YouTube can’t be trusted to decide which ideas are acceptable and which are not. It’s not just that they are politically biased; it’s that they—and the algorithms they develop—are fallible.